This thesis focuses on data found in the field of computational drug discovery. New insight can be obtained by applying machine learning in various ways and in a variety of domains. Two studies delved into the application of proteochemometrics (PCM), a machine learning technique that can be used to find relations in protein-ligand bioactivity data and then predict, via virtual screening, the activity of compounds that had never been tested on a particular protein or set of proteins. On this basis, sets of compounds that were significant in a variety of ways were suggested for experimental validation. Another study investigated mutational patterns in cancer, applying a large dataset of mutation data and identifying several motifs in G protein-coupled receptors. The thesis also contains the work done on the Papyrus dataset, a large-scale bioactivity dataset that focuses on standardising data for computational drug discovery and providing an out-of-the-box set that can be used in a variety of settings.
This thesis introduces the concept of "physics-based inverse design", working on the notion that the physical driving forces governing functionality are inherently encoded in independently parameterized energy functions, which can be resolved through the use of inverse design strategies. The thesis describes the development of EVO-MD, a Python-based implementation of the physics-based inverse design concept. EVO-MD is capable of automatically setting up, performing, and analyzing molecular dynamics simulations, allowing for the evolutionary optimization of complex and dynamic features in peptides. Examples of such applications include the optimization of lipid composition and curvature sensors, and the development of peptides with antiviral properties.
On Tuesday 5 March 2024, the children's programme Het Klokhuis and the ZB organised an interactive Meet Up about Artificial Intelligence, or AI, prompted by the four Klokhuis broadcasts on this subject. Presenter Tirsa With spoke with guests and with pupils from groups 6, 7 and 8 of schools from across the country. Making a Snapchat with a funny filter, smart vacuum cleaners, or having ChatGPT write your class presentation: artificial intelligence is everywhere. But what exactly is it, and what can we do with it? Computer scientist Maarten Lamers explained in the studio how machines make themselves smarter and what the difference is between our human brain and the computer brain. Researcher Oumaima Hajri described where we encounter artificial intelligence. Tremendously useful, but in her view it is also important to keep thinking critically for ourselves; she explained why that matters during the Meet Up. The third guest was primary school teacher Tim Vissers, who develops 'Futureproof' teaching materials. Tim showed where artificial intelligence can turn up in the classroom and, during the Meet Up, challenged the pupils to get started themselves with Het Klokhuis's AI studio.
In this episode we focus on the conference 'AI: the creative (R)evolution in education', an event hosted by the Grafisch Lyceum Rotterdam and SintLucas. Our podcast is rich in perspectives, with contributions from experts, teachers and students on the impact of AI in education. We begin with Maarten Lamers, whose keynote takes us on a journey through the world of creativity and artificial intelligence. He explores questions such as: How does AI work? And can a computer model be 'creative' in a world where AI increasingly displays human traits? We then turn to the students' views: what matters to them? They stress how important it is for teachers to integrate AI, not only for skills development but also to preserve the quality of the learning process. Teachers, in turn, share the insights they have gained about AI and how it affects their teaching, including practical applications from the workshops. Next, we talk to Oscar Lepoeter, lecturer at the Windesheim teacher-training college (pabo). He emphasises the importance of thinking critically about the information we share and about how student data is handled, asking whether students can still start with a clean slate. We close the podcast with Erno Mijland, one of the event's organisers: how did he experience the day, and why was it such a success?
Music, cooking, art, humour: AI can do it all. But can creativity be captured in a computer program? How do 'creative' computers work? And how does that relate to creativity in humans? In this Luisterlab, computer scientist Maarten Lamers, maker Zeno van den Broek and Fedor Teunisse discuss the relationship between human and machine. This programme is an introduction to the Soundsofmusic programme Blind by DJ and producer of experimental electronic jazz Jameszoo, alias Mitchel van Dinther. Jameszoo has developed an algorithm that plays a Disklavier and controls a robot as a non-existent soloist. Blind is preceded by MA(N|CHINE) by Zeno van den Broek for four percussionists: modified bass drums and drum machines explore the relationship between human and machine in an intense visual performance. Maarten Lamers is a computer scientist at Leiden University who conducts cross-disciplinary research on artificial intelligence, robots, biological-digital hybrid systems and creativity in research. Zeno van den Broek is a Dutch composer and artist whose work examines and sharpens the relationship between human and machine. Percussionist, motivator and educator Fedor Teunisse is artistic director of Slagwerk Den Haag (SDH) and of Asko|Schönberg; he also teaches at the Royal Conservatoire in The Hague. This programme is organised in collaboration with Soundsofmusic, a festival for curious ears.
Ermolayeva, A.; Birukou, A.; Matyushenko, S.; Kochetkov, D. 2023
Artificial Intelligence (AI) is a rapidly developing field of research that attracts significant funding from both the state and industry players. Such interest is driven by the wide range of applications of AI technology in many fields. Since many AI research topics relate to computer science, where a significant share of research results are published in conference proceedings, the same applies to AI. The world leaders in artificial intelligence research are China and the United States. The authors conducted a comparative analysis of the bibliometric indicators of AI conference papers from these two countries based on Scopus data. The analysis aimed to identify conferences that receive above-average citation rates and to suggest publication strategies for authors from these countries, so that they can participate in conferences that are likely to provide better dissemination of their research results. The results showed that, although Chinese researchers publish more AI papers than those from the United States, US conference papers are cited more frequently. The authors also conducted a correlation analysis of the MNCS index, which revealed no high correlation between MNCS USA and MNCS China, between MNCS China/MNCS USA and MSAR, or between MNCS China/MNCS USA and the CORE ranking indicators.
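The correlation analysis mentioned above rests on the standard Pearson coefficient. A minimal sketch of that computation, using made-up per-conference indicator values rather than the paper's actual MNCS data:

```python
def pearson(xs, ys):
    # Pearson correlation coefficient between two indicator series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# hypothetical MNCS values per conference for the two countries
mncs_usa = [1.2, 0.8, 1.5, 0.9, 1.1]
mncs_china = [0.7, 1.3, 0.6, 1.4, 0.8]
r = pearson(mncs_usa, mncs_china)
```

A value of r near zero would correspond to the "no high correlation" finding reported in the abstract; the toy data above is not drawn from the study.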
How creative is artificial art? Can computers be creative? With literary scholar Prof. Kiene Brillenburg Wurth and computer scientist Dr. Maarten Lamers. Creativity is seen by many as something uniquely human. So it stings when something like AI suddenly produces beautiful paintings, raps like Drake and writes literary thrillers. For laypeople it is increasingly difficult to distinguish computer art from real art: earlier this year an AI-generated photograph even won a prestigious photography prize. How tenable are our ideas about originality and creativity? Is creating merely the combination of previously acquired knowledge and experience, or does something new, something magical, something human happen when an artist sets to work? Can you make art without an understanding of emotions, and without the experience of a body? Computer scientist Dr. Maarten Lamers (Leiden University) explains how computers make art, and literary scholar Prof. Kiene Brillenburg Wurth (Utrecht University) discusses what computers can teach us about our own creativity.
This thesis investigates how the assessment of the circular economy (CE) at the macro-economic level can be facilitated and promoted. First, a study on the socio-economic and environmental impacts of international agricultural supply chains is presented to exemplify how Multi-Regional Environmentally Extended Input-Output (MR EEIO) data can be used to support policy making. Then, a Python software package (pycirk) and methods for standardized and replicable CE scenarios are presented, with a case study on the global environmental and socio-economic impacts of CE strategies. The thesis also presents an easy-to-use, open-source, web-based tool for CE scenario construction and analysis (RaMa-Scene). Through these studies, MR EEIO appears to be an adequate tool to assess CE scenarios. However, the implementation of CE interventions will require a variety of micro-level changes across the current international production and consumption system, and in many cases more detailed data are required than are currently available in existing MR EEIO databases. Data availability for CE assessment could be increased through the use of computer-aided technologies and artificial intelligence methods in combination with Life Cycle Inventory modelling and MR EEIO databases, but this is only one potential way forward. In fact, the industrial ecology and circular economy communities have many opportunities ahead to improve data collection practices by leveraging digital technologies and artificial intelligence methods. However, coordination in these scientific communities is needed to ensure that the full potential of these technological developments is harvested for the benefit of a sustainable circular economy and society.
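MR EEIO analysis builds on the standard Leontief input-output model, x = (I - A)^-1 y, extended with environmental coefficients per unit of output. A minimal two-sector sketch of that backbone (the coefficients below are illustrative stand-ins, not taken from pycirk or any of the databases the thesis uses):

```python
def leontief_output(A, y):
    # total output x solving x = A x + y for a 2-sector economy,
    # i.e. x = (I - A)^-1 y, with the 2x2 inverse written out directly
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [(d * y[0] - b * y[1]) / det,
            (a * y[1] - c * y[0]) / det]

# illustrative technical coefficients and final demand
A = [[0.1, 0.2],
     [0.3, 0.1]]
y = [100.0, 50.0]
x = leontief_output(A, y)

# environmental extension: hypothetical emission intensities (kg CO2
# per unit of sectoral output) scale total output into total impacts
f = [0.5, 2.0]
impacts = [fi * xi for fi, xi in zip(f, x)]
```

A CE scenario would then modify entries of A, y, or f (e.g. lower material inputs, shifted final demand) and compare the resulting impacts against the baseline.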
AI is becoming increasingly human-like. It appears capable of creative feats: think of the paintings DALL-E can produce or the poems ChatGPT writes. But creativity, surely, is our human thing? Are we still special as humans? Computer scientist Maarten Lamers (Leiden University) explains whether there is still a difference between our human brain and the computer.
As a result of major technological advances in healthcare, data are increasingly collected during the conduct of clinical trials. It is essential to realise, however, that data by themselves are of little or no value. To be of optimal use, data must be analysed, interpreted and processed. Machine learning strategies can offer useful and adequate solutions for this. This thesis presents machine learning approaches applied to various clinical datasets. The classical data consist of electrical signals from electrocardiograms (ECGs) obtained from healthy volunteers, the innovative data come from measurements in a driving simulator, and the emerging data are derived from DNA analysis of the micro-organisms present on the skin of patients with skin diseases. We showed that the number of ECGs influenced the accuracy of the estimated QT-interval prolongation for all QT-correction formulas used. Using SHapley Additive exPlanations (SHAP) values, the impact of individual features on the prediction of the physiological age of the heart was determined. We used machine learning to better assess the driving performance of drivers who had taken medication. Finally, we showed that the micro-organisms most important for discriminating seborrhoeic dermatitis (besides Cutibacterium and Staphylococcus) were relatively rare, so that these micro-organisms can easily be overlooked in standard analyses.
With this, we demonstrated that machine learning can be applied to data derived from clinical trials to detect and evaluate the effect of drugs and other interventions at an early stage.
Since the rise of AI, people have wondered whether computers can also be creative, and we increasingly see examples of so-called "artificial creativity". Computers compose music, invent recipes, create visual artworks and even crack jokes. But can creativity really be captured in a computer program? And if so, how? Maarten Lamers discusses what computers can actually do in this area and, just as importantly, what they cannot. Using classic and current examples of "creative" computers, he makes clear how these systems work and what this has to do with AI. This makes it easier to understand what is written in the media about creative AI, without requiring deep technical knowledge. This lecture is organised in collaboration with Studium Generale and takes place during the Art & Science Week of the Leiden European City of Science programme.
Gorostiola Gonzalez, M.; Janssen, A.P.A.; IJzerman, A.P.; Heitman, L.H.; Westen, G.J.P. van 2022
The integration of machine learning and structure-based methods has proven valuable in the past as a way to prioritize targets and compounds in early drug discovery. In oncological research, these methods can be highly beneficial in addressing the diversity of neoplastic diseases portrayed by the different hallmarks of cancer. Here, we review six use case scenarios for integrated computational methods, namely driver prediction, computational mutagenesis, (off-)target prediction, binding site prediction, virtual screening, and allosteric modulation analysis. We address the heterogeneity of integration approaches and individual methods, while acknowledging their current limitations and highlighting their potential to bring drugs for personalized oncological therapies to the market faster.
Wall, H.E.C. van der; Hassing, G.J.; Doll, R.J.; Westen, G.J.P. van; Cohen, A.F.; Selder, J.L.; ... ; Gal, P. 2022
Objective: The aim of the present study was to develop a neural network to characterize the effect of aging on the ECG in healthy volunteers. Moreover, the impact of the various ECG features on aging was evaluated.
Methods & results: A total of 6228 healthy subjects without structural heart disease were included in this study. A neural network regression model was created to predict the age of the subjects based on their ECG; 577 parameters derived from a 12-lead ECG of each subject were used to develop and validate the network. A tenfold cross-validation was performed, using 118 subjects for validation in each fold. Using SHapley Additive exPlanations (SHAP) values, the impact of the individual features on the prediction of age was determined. Of the 6228 subjects tested, 1808 (29%) were female, and the mean age was 34 years (range 18-75 years). Physiologic age was estimated as a continuous variable with an average error of 6.9 ± 5.6 years (R2 = 0.72 ± 0.04). The correlation was slightly stronger for men (R2 = 0.74) than for women (R2 = 0.66). The features with the greatest impact on the prediction of physiologic age were T wave morphology indices in leads V4 and V5, and P wave amplitude in leads aVR and II.
Conclusion: The application of machine learning to the ECG using a neural network regression model allows accurate estimation of physiologic cardiac age. This technique could be used to pick up subtle age-related cardiac changes, but also to estimate the reversal of these age-associated effects by administered treatments.
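SHAP values, as used in this study, implement the Shapley attribution principle: a feature's contribution is its average marginal effect over all coalitions of the other features, with absent features replaced by a baseline. A brute-force sketch of exact Shapley values for a toy two-feature model (the study itself used 577 ECG parameters and dedicated tooling; everything below is purely illustrative):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    # exact Shapley attribution: for each feature i, average the marginal
    # contribution predict(S + {i}) - predict(S) over all subsets S of the
    # remaining features, weighted by |S|! (n - |S| - 1)! / n!
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j]
                          for j in range(n)]
                without = [x[j] if j in S else baseline[j]
                           for j in range(n)]
                phi[i] += w * (predict(with_i) - predict(without))
    return phi

# hypothetical linear "age model" over two normalized ECG features
predict_age = lambda v: 2.0 * v[0] + 3.0 * v[1] + 30.0
phi = shapley_values(predict_age, [1.0, 1.0], [0.0, 0.0])
```

For a linear model the Shapley value of feature i reduces to its coefficient times its deviation from baseline, so phi comes out as [2.0, 3.0] here; this exhaustive computation scales exponentially in the number of features, which is why practical SHAP implementations approximate it.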
According to Chiao in his contribution to this book, the desirability of the use of AI in sentencing should be evaluated by comparing computers to the status quo ante, rather than to an unrealistic, and in any case unrealized, ideal. Although we agree that changes to the legal process, such as adopting algorithmic sentencing methods, can be beneficial when the change is an incremental improvement over the status quo, in order to assess whether a change is an improvement we need to know what the "ideal" is toward which improvements are aimed. Therefore, the question of whether AI is better at making sentencing decisions than human judges is approached differently in this chapter. We compare human with AI judges by evaluating the extent to which they are able to make a legitimate sentencing decision: is legitimacy better achieved by machine or by human judges?
Artificial intelligence is increasingly used throughout all processes of the news cycle. AI also has untapped corrective potential: by learning to point readers to diverse, quality, and/or legitimate news after exposure to 'fake news', 'false narratives', and disinformation, AI can play a powerful role in cleaning up the information ecosystem. Yet AI systems often 'learn' from training data that contains historical inaccuracies and biases, with results proven to embed discriminatory attitudes and behaviours. Because this training data often does not contain personal information, regulation of AI in the news production cycle is largely overlooked by legal commentators. Accordingly, this chapter lays out the risks and challenges that AI poses in both journalistic content creation and moderation, especially through machine learning in the post-truth world. It also assesses the media's rights and responsibilities for using AI in journalistic endeavours in light of the EU's AI draft regulation legislative process.
Algorithms have become increasingly common, and with this development, so have algorithms that approximate human speech. This has introduced new issues with which courts and legislators will have to grapple. Courts in the United States have found that search engine results are a form of speech protected by the Constitution, and cases in Europe concerning liability for autocomplete suggestions have led to varied results. Beyond these instances, insights into how courts handle algorithmic speech are few and far between. By focusing on three categories of algorithmic speech, defined as curated production, interactive/responsive production, and semi-autonomous production, this Article analyzes these forms of algorithmic speech within the international framework for freedom of expression. After a brief introduction of that framework and a look at approaches to algorithmic speech in the United States, the Article examines whether the creators or controllers of different forms of algorithms should be considered content providers or mere intermediaries, a determination that ultimately has implications for liability, which is also explored. The Article then looks at possible interferences with algorithmic speech and at how such interferences may be examined under the three-part test, with particular attention paid to the balancing of the rights and interests at play, in order to answer the question of the extent to which algorithmic speech is worthy of protection under international standards of freedom of expression. Finally, other relevant issues surrounding algorithmic speech are discussed that will have an impact going forward, many of which involve questions of policy and societal values that accompany granting algorithmic speech protection.
The thesis is part of a bigger project, HEPGAME (High Energy Physics Game). The main objective of HEPGAME is the utilization of AI solutions, particularly the use of MCTS for the simplification of HEP calculations. One of the issues is solving mathematical expressions of interest with millions of terms. These calculations can be solved with the FORM program, software for symbolic manipulation. Since these calculations are computationally intensive and take a large amount of time, the FORM program was parallelized to solve them in a reasonable amount of time. Therefore, any new algorithm based on MCTS should also be parallelized. This requirement was behind the problem statement of the thesis: "How do we design a structured pattern-based parallel programming approach for efficient parallelism of MCTS for both multi-core and manycore shared-memory machines?" To answer this question, the thesis approached the MCTS parallelization problem on three levels: (1) the implementation level, (2) the data structure level, and (3) the algorithm level. At the implementation level, we proposed task-level parallelization over thread-level parallelization; task-level parallelization provides efficient parallelism for MCTS, utilizing the cores of both multi-core and manycore machines. At the data structure level, we presented a lock-free data structure that guarantees correctness. A lock-free data structure (1) removes the synchronization overhead when a parallel program needs many tasks to feed its cores and (2) improves both performance and scalability. At the algorithm level, we first explained how to use the pipeline pattern for parallelization of MCTS to overcome search overhead.
Then, through a step-by-step approach, we were able to propose and detail a structured parallel programming approach for Monte Carlo Tree Search.
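The task-level idea proposed at the implementation level can be sketched with Python's task-based executor: each playout is an independent task that the runtime maps onto available workers, rather than one long-lived thread per worker. The toy below parallelizes only the random playouts at the root of an MCTS over a hypothetical three-action game; it is a sketch of the task-over-thread concept, not the thesis's lock-free implementation:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# hypothetical game: three root actions with unknown win probabilities
WIN_PROB = {0: 0.2, 1: 0.5, 2: 0.8}

def playout(action):
    # one random rollout from the given root action (1.0 = win, 0.0 = loss)
    return 1.0 if random.random() < WIN_PROB[action] else 0.0

def best_action(actions, n_playouts=300, workers=4):
    # each playout is submitted as an independent task; the executor
    # distributes tasks over workers (task-level parallelism), and the
    # per-action means are aggregated at the root
    means = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for a in actions:
            results = list(pool.map(playout, [a] * n_playouts))
            means[a] = sum(results) / n_playouts
    return max(means, key=means.get)
```

With enough playouts the estimated means converge to the true win probabilities, so the action with the highest probability is selected; a full MCTS would additionally grow a tree and balance exploration and exploitation (e.g. with UCT), which this sketch omits.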
A P300-based Brain-Computer Interface character speller, also known as a P300 speller, is an important communication pathway, under extensive research, for people who have lost motor ability, such as patients with Amyotrophic Lateral Sclerosis or spinal-cord injury. A P300 speller allows humans to spell characters directly using eye gaze, thereby building communication between the human brain and a computer. Unfortunately, P300 spellers are still not used in daily life and remain at an experimental stage in research labs, because the performance and efficiency of current P300 spellers are unacceptably low for BCI users in daily life. Therefore, in this thesis we have focused our attention on developing high-performance and efficient P300 spellers in order to bring them into practical use. More specifically, to increase the performance of a P300 speller, we have developed methods to increase the character spelling accuracy and the Information Transfer Rate. To improve the efficiency of a P300 speller, we have developed methods to reduce the number of sensors needed to acquire EEG signals as well as to reduce the complexity of the classifier used in a P300 speller, without losing performance.
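A matrix-style P300 speller identifies the attended character by intersecting the row and the column whose flashes elicit the strongest P300 response. A minimal sketch of that final decoding step, assuming per-row and per-column classifier scores are already available (the 6x6 grid is the classic matrix-speller layout, not necessarily the one used in this thesis):

```python
# classic 6x6 P300 matrix speller layout (assumed)
GRID = ["ABCDEF",
        "GHIJKL",
        "MNOPQR",
        "STUVWX",
        "YZ1234",
        "56789_"]

def decode_character(row_scores, col_scores):
    # the attended character sits at the intersection of the row and the
    # column whose flashes produced the highest P300 classifier score
    r = max(range(6), key=lambda i: row_scores[i])
    c = max(range(6), key=lambda j: col_scores[j])
    return GRID[r][c]

# hypothetical classifier scores averaged over repeated flashes
row_scores = [0.1, 0.2, 0.9, 0.1, 0.2, 0.1]   # row 2 strongest
col_scores = [0.2, 0.1, 0.1, 0.8, 0.1, 0.2]   # column 3 strongest
```

The accuracy and efficiency work described in the abstract happens upstream of this step: better EEG classification sharpens the score vectors, and fewer flash repetitions or sensors reduce the cost of producing them.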
This study presents an agent-based simulation model exploring the patterns of presence and absence of Late Pleistocene Neanderthals in western Europe. HomininSpace implements a parameterized generic demographic and social model of hominin dispersal while avoiding parameter value biases and explicitly modelled handicaps. Models are simulated through time within a high-resolution environment in which reconstructed temperatures and precipitation levels influence the carrying capacity of the landscape. Model parameter values are assigned and varied automatically while optimizing the match with Neanderthal archaeology using a Genetic Algorithm (GA) inspired by the processes of natural selection. The system is able to traverse the huge parameter space created by the complete set of all possible parameter value combinations to find those values that result in a simulation that matches well with archaeological data in the form of radiometrically obtained presence data.
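The GA loop described above (selection, crossover, and mutation driven by a fitness score) can be sketched minimally. Here the fitness is a stand-in: distance of a parameter vector to a fixed target, rather than HomininSpace's match between simulated and radiometrically dated presence data; all names and values below are illustrative:

```python
import random

def fitness(params, target=(0.3, 0.7, 0.5)):
    # stand-in match score: higher (closer to 0) when the parameter
    # vector is nearer the hypothetical optimum
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=40, n_params=3, generations=60, mut_rate=0.2):
    # random initial population of parameter vectors in [0, 1]
    pop = [[random.random() for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:       # occasional mutation
                i = random.randrange(n_params)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children                 # elitist replacement
    return max(pop, key=fitness)
```

Because the best individuals always survive into the next generation, the best fitness is monotone non-decreasing, and the population converges toward the target without ever enumerating the full parameter space, which is the property that makes the approach viable for HomininSpace's huge combination of parameter values.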