Cancer is considered the silent pandemic of the 21st century and the second leading cause of death worldwide. The significant heterogeneity of this disease, seen across various cancer types, individuals, and even tumor cells, makes it extremely challenging to treat effectively and safely in all patients. Personalized oncology has emerged as an efficient strategy to leverage the differences present in cancer for the selective targeting of tumor cells. This approach aims to reduce side effects while maintaining or enhancing therapeutic efficacy. However, the availability of personalized therapies is currently limited, leaving many cancer patients longing for more selective treatments. In this context, computational tools play a crucial role in exploring unresolved questions in cancer research and accelerating the discovery of new proteins that can be selectively targeted in anticancer therapies. One main advantage of using computational tools is the ability to investigate promising protein families that have been overlooked in cancer research due to experimental limitations or publication bias, such as membrane proteins. This thesis delves into the potential of computational tools in prioritizing novel targets, mutations, and drugs for use in personalized oncology, with a specific focus on membrane proteins.
While it has traditionally been seen as a means of documenting an external reality or expressing an internal feeling, photography is now capable of actualizing never-existed pasts and never-lived experiences. Thanks to the latest photographic technologies, we can now take photos in computer games, interpolate them in extended reality platforms, or synthesize them via artificial intelligence. To account for the most recent shifts in conceptualizations of photography, this book proposes the term virtual photography as a binding theoretical framework, defined as a photography that retains the efficiency and function of real photography (made with or without a camera) while manifesting these in an unfamiliar or noncustomary form.
This research focuses on creating composite biomarkers that can classify diagnoses, estimate symptom severity, and detect treatment effects using data from wearable sensors and smartphone applications. The thesis consists of an introduction to machine learning techniques and their use in developing biomarkers for the central nervous system; a narrative review of the relevant literature; and detailed studies on the application of these techniques in various health conditions. Specifically, the research includes observational and cross-sectional studies on facioscapulohumeral muscular dystrophy (FSHD) and major depressive disorder (MDD), demonstrating how smartphone and wearable sensor data can be used to monitor disease severity and progression. Additionally, the research evaluated a tablet-based finger-tapping task for monitoring the real-time effects of antiparkinsonian drugs on Parkinson's symptom severity. Key findings highlight the potential of mHealth biomarkers to provide continuous, real-time monitoring of patients, which can enhance the accuracy of clinical assessments and potentially reduce the burden on patients and healthcare systems. The thesis also addresses the challenges of variability in mHealth device data and emphasizes the need for robust validation and standardization to ensure the reliability of these biomarkers in clinical settings.
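As a hedged illustration of the kind of feature extraction such a tapping task could feed into, the sketch below derives two simple measures (tap rate and inter-tap variability) from a list of tap timestamps; the feature choice and numbers are illustrative assumptions, not the protocol used in the thesis.

```python
# Illustrative only: simple features from a finger-tapping timestamp series.
import numpy as np

tap_times_s = np.array([0.00, 0.21, 0.44, 0.63, 0.85, 1.10, 1.31])  # example taps (seconds)
intervals = np.diff(tap_times_s)                          # inter-tap intervals
features = {
    "tap_rate_hz": 1.0 / intervals.mean(),                # average tapping frequency
    "interval_cv": intervals.std() / intervals.mean(),    # variability (coefficient of variation)
}
print(features)
```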
This thesis focuses on data found in the field of computational drug discovery. New insight can be obtained by applying machine learning in various ways and in a variety of domains. Two studies delved into the application of proteochemometrics (PCM), a machine learning technique that finds relations in protein-ligand bioactivity data and then, in a virtual screen, predicts the activity of compounds that have never been tested on a particular protein or set of proteins. In this way, sets of compounds that stood out in different respects were suggested for experimental validation. Another study investigated mutational patterns in cancer, applying a large dataset of mutation data and identifying several motifs in G protein-coupled receptors. The thesis also contains the work done on the Papyrus dataset, a large-scale bioactivity dataset that focuses on standardising data for computational drug discovery and providing an out-of-the-box set that can be used in a variety of settings.
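A minimal PCM sketch, under the assumption that protein and compound descriptors have already been computed, is shown below: one regressor is trained on concatenated protein-compound descriptor pairs so it can score compounds never tested on a given protein. The arrays and descriptor sizes are placeholders, not the thesis's actual data or the Papyrus dataset.

```python
# Minimal proteochemometrics (PCM) sketch with synthetic descriptors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pairs = 500
protein_desc = rng.normal(size=(n_pairs, 50))              # e.g. sequence-derived descriptors
compound_desc = rng.integers(0, 2, size=(n_pairs, 1024))   # e.g. binary fingerprints
pchembl = rng.uniform(4.0, 9.0, size=n_pairs)              # pChEMBL-style activity values

# One model over protein-compound pairs: this is what lets PCM extrapolate
# to compounds that were never measured against a particular protein.
X = np.hstack([protein_desc, compound_desc])
X_train, X_test, y_train, y_test = train_test_split(X, pchembl, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 2))
```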
This thesis introduces the concept of "physics-based inverse design", working on the notion that the physical driving forces governing functionality are inherently encoded in independently parameterized energy functions, which can be resolved through the use of inverse design strategies. The thesis describes the development of EVO-MD, a Python-based implementation of the physics-based inverse design concept. EVO-MD is capable of automatically setting up, performing, and analyzing molecular dynamics simulations, allowing for the evolutionary optimization of complex and dynamic features in peptides. Examples of such applications include the optimization of lipid composition and curvature sensors, and the development of peptides with antiviral properties.
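The sketch below shows the general shape of an evolutionary optimization loop of the kind EVO-MD automates; in the actual workflow each candidate's fitness would come from a molecular dynamics simulation, whereas here a toy scoring function stands in for that step, so everything in the block is an illustrative assumption rather than EVO-MD's API.

```python
# Schematic evolutionary loop; toy_fitness is a stand-in for an MD-derived
# observable (in EVO-MD this would be a full simulation per candidate peptide).
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_fitness(peptide: str) -> float:
    # Placeholder score: fraction of hydrophobic residues.
    return sum(aa in "LIVFWY" for aa in peptide) / len(peptide)

def mutate(peptide: str, rate: float = 0.1) -> str:
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa
                   for aa in peptide)

population = ["".join(random.choices(AMINO_ACIDS, k=15)) for _ in range(50)]
for _ in range(20):                                    # generations
    population.sort(key=toy_fitness, reverse=True)
    parents = population[:10]                          # keep the fittest peptides
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=toy_fitness)
print("best peptide:", best, "fitness:", round(toy_fitness(best), 2))
```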
On Tuesday 5 March 2024, the children's programme Het Klokhuis and the ZB organised an interactive Meet Up about Artificial Intelligence, or AI. The event followed the four Klokhuis broadcasts on the subject. Presenter Tirsa With spoke with guests and with pupils from groups 6, 7 and 8 of schools from across the country. Making a Snapchat with a funny filter, smart vacuum cleaners, or having ChatGPT write your class presentation: artificial intelligence is everywhere. But what exactly is it, and what can we do with it? Computer scientist Maarten Lamers explained in the studio how machines make themselves smarter and what the difference is between our human brain and the computer brain. Researcher Oumaima Hajri described the many places where we encounter artificial intelligence. Extremely useful, but in her view it is also important to keep thinking critically for ourselves; why that matters is what she explained during the Meet Up. The third guest was primary school teacher Tim Vissers, who develops 'Futureproof' teaching materials. Tim showed where you can come across artificial intelligence in the classroom, and during the Meet Up he challenged the pupils to get started themselves with Het Klokhuis's AI studio.
In this episode we focus on the conference 'AI: de creatieve (R)evolutie in het onderwijs', an event hosted by the Grafisch Lyceum Rotterdam and SintLucas. Our podcast is rich in perspectives, with contributions from experts, teachers and students on the impact of AI in education. We begin with Maarten Lamers, whose keynote takes us on a journey through the world of creativity and artificial intelligence. He explores questions such as: How does AI work? And is it possible for a computer model to be 'creative' in a world where AI increasingly displays human traits? We then turn to the students' views: what matters to them? They stress how important it is for teachers to integrate AI, not only for skills development but also to maintain the quality of the learning process. Teachers in turn share the insights they have gained about AI and how it affects their teaching, including practical applications from the workshops. Next we talk with Oscar Lepoeter, lecturer at the Windesheim primary teacher training college (pabo). He emphasises the importance of thinking critically about the information we share and about the handling of student data, asking whether students can still start with a clean slate. We close the podcast with Erno Mijland, one of the event's organisers: how did he experience the day, and why was it such a success?
Music, cooking, art, humour: AI can do it all. But can creativity be captured in a computer program? How do 'creative' computers work? And how does that relate to creativity in humans? In this Luisterlab, computer scientist Maarten Lamers, maker Zeno van den Broek and Fedor Teunisse discuss the relationship between human and machine. This programme is an introduction to the Soundsofmusic programme Blind by Jameszoo, alias Mitchel van Dinther, DJ and producer of experimental electronic jazz. Jameszoo has developed an algorithm that plays a Disklavier and controls a robot as a 'non existent soloist'. Blind is preceded by MA(N|CHINE) by Zeno van den Broek for four percussionists, in which modified bass drums and drum machines explore the relationship between human and machine in an intense visual performance. Maarten Lamers is a computer scientist at Leiden University. Lamers conducts cross-disciplinary research into artificial intelligence, robots, biological-digital hybrid systems and creativity in research. Zeno van den Broek is a Dutch composer and artist whose work examines and sharpens the relationship between human and machine. Percussionist, motivator and educator Fedor Teunisse is the artistic director of Slagwerk Den Haag (SDH) and of Asko|Schönberg. He also teaches at the Koninklijk Conservatorium in The Hague. This programme is organised in collaboration with Soundsofmusic, a festival for curious ears.
Ermolayeva, A.; Birukou, A.; Matyushenko, S.; Kochetkov, D. 2023
Artificial Intelligence (AI) is a rapidly developing field of research that attracts significant funding from both the state and industry players. Such interest is driven by the wide range of applications of AI technology in many fields. Since many AI research topics fall within computer science, where a significant share of research results is published in conference proceedings, the same publication pattern holds for AI. The world leaders in artificial intelligence research are China and the United States. The authors conducted a comparative analysis of the bibliometric indicators of AI conference papers from these two countries based on Scopus data. The analysis aimed to identify conferences that receive above-average citation rates and to suggest publication strategies for authors from these countries to participate in conferences that are likely to provide better dissemination of their research results. The results showed that, although Chinese researchers publish more AI papers than those from the United States, US conference papers are cited more frequently. The authors also conducted a correlation analysis of the MNCS index, which revealed no strong correlation between MNCS USA and MNCS China, between MNCS China/MNCS USA and MSAR, or between MNCS China/MNCS USA and the CORE ranking indicators.
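For readers unfamiliar with the indicator, the mean normalized citation score (MNCS) averages each paper's citations after normalizing by the expected citation count for its field and publication year; a hedged, made-up example of that calculation is sketched below (the numbers are not from the study).

```python
# Toy MNCS calculation: citations normalized by the field/year world average,
# then averaged per country. Values are invented for illustration.
import pandas as pd

papers = pd.DataFrame({
    "country":   ["CN", "CN", "US", "US"],
    "citations": [4, 10, 12, 6],
    "world_avg": [5.0, 5.0, 5.0, 8.0],   # expected citations for that field and year
})
papers["ncs"] = papers["citations"] / papers["world_avg"]
print(papers.groupby("country")["ncs"].mean())       # MNCS per country
```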
How creative is artificial art? Can computers be creative? With literary scholar Prof. Kiene Brillenburg Wurth and computer scientist Dr. Maarten Lamers. Creativity is seen by many as something uniquely human. So it rankles when AI suddenly produces beautiful paintings, raps like Drake and writes literary thrillers. For lay people it is becoming ever harder to distinguish computer art from real art: earlier this year an AI-generated photo even won a prestigious photography prize. How tenable are our ideas about originality and creativity? Is creating merely the combining of previously acquired knowledge and experience, or does something new, something magical, something human happen when an artist sets to work? Can you make art without an understanding of emotions, and without the experience of having a body? Computer scientist Dr. Maarten Lamers (LEI) explains how computers make art, and literary scholar Prof. Kiene Brillenburg Wurth (UU) discusses what computers can teach us about our own creativity.
This thesis investigates how the assessment of the circular economy (CE) at the macro-economic level can be facilitated and promoted. First, a study on the socio-economic and environmental impacts of international agricultural supply chains is presented to exemplify how Multi-Regional Environmentally Extended Input-Output (MR EEIO) data can be used to support policy making. Then, a Python software package (pycirk) and methods for standardized and replicable CE scenarios are presented, with a case study on the global environmental and socio-economic impacts of CE strategies. The thesis also presents an easy-to-use, open-source, web-based tool for CE scenario construction and analysis (RaMa-Scene). Through these studies, MR EEIO appears to be an adequate tool to assess CE scenarios. However, the implementation of CE interventions will require a variety of micro-level changes across the current international production and consumption system, and in many cases more detailed data are required than are currently available in existing MR EEIO databases. Data availability for CE assessment could be increased through the use of Computer-Aided Technologies and Artificial Intelligence methods in combination with Life Cycle Inventory modelling and MR EEIO databases, but this is only one potential way forward. In fact, the industrial ecology and circular economy communities have many opportunities ahead to improve data collection practices by leveraging digital technologies and artificial intelligence methods. However, coordination in these scientific communities is needed to ensure that the full potential of these technological developments is harnessed for the benefit of a sustainable circular economy and society.
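As a minimal, hedged illustration of the input-output arithmetic that EEIO assessments build on, the block below computes total output and emissions for a toy three-sector economy via the Leontief inverse; the coefficients are invented and the example is not taken from pycirk or RaMa-Scene.

```python
# Toy environmentally extended input-output (EEIO) calculation:
# total output x = (I - A)^-1 y, impacts = F * x. All numbers are illustrative.
import numpy as np

A = np.array([[0.1, 0.2, 0.0],      # technical coefficients (inputs per unit of output)
              [0.3, 0.1, 0.2],
              [0.0, 0.2, 0.1]])
y = np.array([100.0, 50.0, 80.0])   # final demand per sector
F = np.array([0.5, 1.2, 0.3])       # emission intensity per unit of output (e.g. kg CO2)

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse
x = L @ y                           # total output needed to satisfy final demand
impacts = F * x                     # environmental extension per sector
print("total output:", x.round(1), "| total emissions:", round(impacts.sum(), 1))
```

A CE scenario can then be expressed as changes to A, y, or F before repeating the same calculation.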
AI is becoming increasingly human-like. It appears capable of creative things; think of the paintings DALL-E can make or the poems ChatGPT writes. But creativity, isn't that our human thing? Are we still special as humans? Computer scientist Maarten Lamers (Leiden University) explains whether there is still a difference between our human brain and the computer.
As a result of major technological advances in healthcare, increasing amounts of data are being collected during the conduct of clinical trials. It is essential to realise, however, that data by themselves have little or no value. For their optimal use, data must be analysed, interpreted and processed. Machine-learning strategies can offer useful and adequate solutions for this. This thesis contains machine-learning approaches applied to various clinical datasets. The classical data consist of electrical signals of the electrocardiogram (ECG) obtained in healthy volunteers, the innovative data come from measurements in a driving simulator, and the emerging data are derived from DNA analysis of the micro-organisms present on the skin of patients with skin diseases. We showed that the number of ECGs influenced the accuracy of the estimated prolongation of the QT interval for all QT-correction formulas used. Using SHapley Additive exPlanations (SHAP) values, the impact of the individual features on the prediction of the physiological age of the heart was determined. We used machine learning for a better assessment of the driving performance of drivers who had taken medication. Finally, we showed that the micro-organisms most important for discriminating seborrhoeic dermatitis, besides Cutibacterium and Staphylococcus, were relatively rare, so they can easily be overlooked in standard analyses. With this, we demonstrated that machine learning can be applied to data derived from clinical trials to detect and evaluate the effect of drugs and other interventions at an early stage.
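For context on the QT-correction formulas mentioned in the summary, the block below implements two standard ones, Bazett and Fridericia; the example values are arbitrary, and the thesis may have compared additional formulas.

```python
# Two standard heart-rate corrections of the QT interval (example values only).
def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    return qt_ms / rr_s ** 0.5        # QTc = QT / sqrt(RR)

def qtc_fridericia(qt_ms: float, rr_s: float) -> float:
    return qt_ms / rr_s ** (1 / 3)    # QTc = QT / cbrt(RR)

qt_ms, rr_s = 400.0, 0.8              # QT in milliseconds, RR interval in seconds
print(round(qtc_bazett(qt_ms, rr_s)), round(qtc_fridericia(qt_ms, rr_s)))
```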
Since the rise of AI, people have wondered whether computers can also be creative, and we see examples of so-called "artificial creativity" more and more often. Computers compose music, invent recipes, create visual artworks and even crack jokes. But can creativity really be captured in a computer program? And if so, how? Maarten Lamers discusses what computers can actually do in this area and, just as importantly, what they cannot. Using classic and current examples of "creative" computers, he makes clear how these systems work and what this has to do with AI. That makes it easier to understand what is written in the media about creative AI, without in-depth technical knowledge. This lecture is organised in collaboration with Studium Generale and takes place during the Art & Science Week of the Leiden European City of Science programme.
Aims: Pulmonary arterial hypertension (PAH) is a rare but serious disease associated with high mortality if left untreated. This study aims to assess the prognostic cardiac magnetic resonance (CMR) features in PAH using machine learning. Methods and results: Seven hundred and twenty-three consecutive treatment-naive PAH patients were identified from the ASPIRE registry; 516 were included in the training cohort and 207 in the validation cohort. A multilinear principal component analysis (MPCA)-based machine learning approach was used to extract mortality and survival features throughout the cardiac cycle. The features were overlaid on the original imaging using thresholding and clustering of high- and low-risk mortality prediction values. The 1-year mortality rate in the validation cohort was 10%. Univariable Cox regression analysis of the combined short-axis and four-chamber MPCA-based predictions was statistically significant (hazard ratio: 2.1, 95% CI: 1.3-3.4, c-index = 0.70, P = 0.002). The MPCA features improved the 1-year mortality prediction of REVEAL from c-index = 0.71 to 0.76 (P ≤ 0.001). Abnormalities in the end-systolic interventricular septum and end-diastolic left ventricle indicated the highest risk of mortality. Conclusion: MPCA-based machine learning is an explainable, time-resolved approach that allows visualization of prognostic cardiac features throughout the cardiac cycle at the population level, making this approach transparent and clinically interpretable. In addition, the added prognostic value over the REVEAL risk score and CMR volumetric measurements allows for a more accurate prediction of 1-year mortality risk in PAH.
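A hedged sketch of the survival-analysis step described above is given below: a univariable Cox model and concordance index computed on synthetic data, with one feature standing in for the MPCA-based prediction. It uses the lifelines library and is not the study's actual pipeline or data.

```python
# Univariable Cox regression and c-index on synthetic data (illustrative only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
n = 300
mpca_score = rng.normal(size=n)                                   # stand-in for the MPCA prediction
time_to_event = rng.exponential(scale=np.exp(-0.7 * mpca_score))  # higher score -> shorter survival
event = (rng.uniform(size=n) < 0.6).astype(int)                   # 1 = death observed, 0 = censored

df = pd.DataFrame({"mpca": mpca_score, "time": time_to_event, "event": event})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                                               # hazard ratio, 95% CI, p-value

# Concordance index: negate the partial hazard so higher values mean longer survival.
c = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["event"])
print("c-index:", round(c, 2))
```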
Gorostiola Gonzalez, M.; Janssen, A.P.A.; IJzerman, A.P.; Heitman, L.H.; Westen, G.J.P. van 2022
The integration of machine learning and structure-based methods has proven valuable in the past as a way to prioritize targets and compounds in early drug discovery. In oncological research, these methods can be highly beneficial in addressing the diversity of neoplastic diseases portrayed by the different hallmarks of cancer. Here, we review six use case scenarios for integrated computational methods, namely driver prediction, computational mutagenesis, (off)-target prediction, binding site prediction, virtual screening, and allosteric modulation analysis. We address the heterogeneity of integration approaches and individual methods, while acknowledging their current limitations and highlighting their potential to bring drugs for personalized oncological therapies to the market faster.
Wall, H.E.C. van der; Hassing, G.J.; Doll, R.J.; Westen, G.J.P. van; Cohen, A.F.; Selder, J.L.; ... ; Gal, P. 2022
Objective: The aim of the present study was to develop a neural network to characterize the effect of aging on the ECG in healthy volunteers. Moreover, the impact of the various ECG features on aging was evaluated. Methods & results: A total of 6228 healthy subjects without structural heart disease were included in this study. A neural network regression model was created to predict the age of the subjects based on their ECG; 577 parameters derived from a 12-lead ECG of each subject were used to develop and validate the neural network. A tenfold cross-validation was performed, using 118 subjects for validation in each fold. Using SHapley Additive exPlanations (SHAP) values, the impact of the individual features on the prediction of age was determined. Of the 6228 subjects tested, 1808 (29%) were female and the mean age was 34 years (range 18-75 years). Physiologic age was estimated as a continuous variable with an average error of 6.9 ± 5.6 years (R2 = 0.72 ± 0.04). The correlation was slightly stronger for men (R2 = 0.74) than for women (R2 = 0.66). The most important features for the prediction of physiologic age were T wave morphology indices in leads V4 and V5, and P wave amplitude in leads aVR and II. Conclusion: The application of machine learning to the ECG using a neural network regression model allows accurate estimation of physiologic cardiac age. This technique could be used to pick up subtle age-related cardiac changes, but also to estimate the reversal of these age-associated effects by administered treatments.
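As a rough sketch of this modelling approach, the block below trains a small neural-network regressor with tenfold cross-validation and computes SHAP attributions; the data are synthetic placeholders rather than the 577 ECG parameters, and the model architecture is an assumption, not the one used in the paper.

```python
# Neural-network age regression with tenfold CV and SHAP attributions (synthetic data).
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 20))                      # stand-in for ECG-derived features
age = 35 + 8 * X[:, 0] - 4 * X[:, 5] + rng.normal(scale=5, size=600)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=3)
scores = cross_val_score(model, X, age, cv=10, scoring="r2")
print("tenfold CV R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))

# SHAP on the fitted model: per-subject, per-feature contributions to predicted age.
model.fit(X, age)
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:20])
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(2))
```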
Mechanisms to control public power have been developed and shaped around human beings as decision-makers at the centre of the public administration. However, technology is radically changing how public administration is organised, and reliance on Artificial Intelligence is on the rise across all sectors. While carrying the promise of an increasingly efficient administration, automating (parts of) administrative decision-making processes also poses a challenge to our human-centred systems of control of public power. This article focuses on one of these control mechanisms: the duty to give reasons under EU law, a pillar of administrative law designed to enable individuals to challenge decisions and courts to exercise their powers of review. First, it analyses whether the duty to give reasons can be meaningfully applied when EU bodies rely on AI systems to inform their decision-making. Secondly, it examines the added value of secondary law, in particular the data protection rules applicable to EU institutions and the draft EU Artificial Intelligence Act, in complementing and adapting the duty to give reasons to better fulfil its purpose in a (partially) automated administration. This article concludes that the duty to give reasons provides a useful starting point but leaves a number of aspects unclear. While providing important safeguards, neither EU data protection law nor the draft EU Artificial Intelligence Act currently fills these gaps.
According to Chiao in his contribution to this book, the desirability of the use of AI in sentencing should be evaluated by comparing computers to the status quo ante, rather than to an unrealistic, and in any case unrealized, ideal. Although we agree that changes to the legal process such as adopting algorithmic sentencing methods can be beneficial when the change is an incremental improvement over the status quo, in order to assess whether the change is an improvement, we need to know what this "ideal" is toward which improvements are aimed. Therefore, the question whether AI is better at making sentencing decisions than human judges is approached differently in this chapter. We compare human with AI judges by evaluating the extent to which they are able to make a legitimate sentencing decision: Is legitimacy better achieved by machine than by human judges?
Artificial intelligence is increasingly used throughout all processes of the news cycle. AI also has untapped corrective potential. By learning to point readers to diverse, quality, and/or legitimate news after exposure to 'fake news', 'false narratives', and disinformation, AI can play a powerful role in cleaning up the information ecosystem. Yet AI systems often 'learn' from training data that contains historical inaccuracies and biases, with results proven to embed discriminatory attitudes and behaviours. Because this training data often does not contain personal information, regulation of AI in the news production cycle is largely overlooked by legal commentators. Accordingly, this chapter lays out the risks and challenges that AI poses in both journalistic content creation and moderation, especially through machine learning in the post-truth world. It also assesses the media's rights and responsibilities for using AI in journalistic endeavours in light of the EU's AI draft regulation legislative process.