Through the quantification of physical activity energy expenditure (PAEE), health care monitoring has the potential to stimulate vital and healthy ageing, inducing behavioural changes in older people and linking these to personal health gains. To measure PAEE in a health care setting, methods based on wearable accelerometers have been developed; however, these mainly target younger people. Since elderly subjects differ in energy requirements and in their range of physical activities, the current models may not be suitable for estimating PAEE among the elderly. Furthermore, currently available methods seem to be either simple but non-generalizable, or to require elaborate (manual) feature construction steps. Because past activities influence present PAEE, we propose a modeling approach known for its ability to model sequential data: the recurrent neural network (RNN). To train the RNN for an elderly population, we used the Growing Old Together Validation (GOTOV) dataset, with 34 healthy participants aged 60 years and older (mean 65 years), performing 16 different activities. We used accelerometers placed on the wrist and ankle, and energy expenditure measured by means of indirect calorimetry. After optimization, we propose an architecture consisting of an RNN with 3 GRU layers and a feedforward network that combines accelerometer and participant-level data. Our efforts included switching from the mean to the standard deviation for down-sampling the input data and combining temporal and static data (person-specific details such as age, weight, and BMI). The resulting architecture produces accurate PAEE estimations while decreasing training input and time by a factor of 10.
Moreover, compared to the state of the art, it is capable of integrating longer activity data, which leads to more accurate EE estimations for low-intensity activities. It can thus be employed to investigate associations of PAEE with vitality parameters of older people related to metabolic and cognitive health and mental well-being.
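The core idea of the architecture described above, stacked GRU layers over the accelerometer stream whose final hidden state is concatenated with static participant features before an output layer, can be sketched as follows. This is a minimal NumPy illustration with untrained random weights and invented layer sizes, not the published model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state."""
    def __init__(self, n_in, n_hidden, rng):
        shape = (n_hidden, n_in + n_hidden)
        self.Wz = rng.normal(0.0, 0.1, shape)
        self.Wr = rng.normal(0.0, 0.1, shape)
        self.Wh = rng.normal(0.0, 0.1, shape)

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                               # update gate
        r = sigmoid(self.Wr @ xh)                               # reset gate
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))  # candidate
        return (1.0 - z) * h + z * h_cand

def estimate_paee(accel_seq, static, layers, w_out):
    """Run stacked GRU layers over the accelerometer sequence, then feed
    the final hidden state plus static features to a linear output."""
    seq = list(accel_seq)
    for cell in layers:
        h = np.zeros(cell.Wz.shape[0])
        out = []
        for x in seq:
            h = cell.step(x, h)
            out.append(h)
        seq = out
    return float(w_out @ np.concatenate([seq[-1], static]))

rng = np.random.default_rng(0)
layers = [GRUCell(6, 16, rng), GRUCell(16, 16, rng), GRUCell(16, 16, rng)]
w_out = rng.normal(0.0, 0.1, 16 + 3)
accel = rng.normal(size=(50, 6))        # 50 time steps, wrist + ankle (3 axes each)
static = np.array([65.0, 75.0, 24.5])   # age, weight (kg), BMI -- illustrative values
print(estimate_paee(accel, static, layers, w_out))
```

In a trained model the weights would of course be fitted against the calorimetry ground truth; the sketch only shows how temporal and static inputs are combined.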
Hollander, D. den; Dirkson, A.; Verberne, S.; Kraaij, W.; Oortmerssen, G. van; Gelderblom, H.; ... ; Husson, O. 2022
Scientific data analyses often combine several computational tools in automated pipelines, or workflows. Thousands of such workflows have been used in the life sciences, though their composition has remained a cumbersome manual process due to a lack of standards for annotation, assembly, and implementation. Recent technological advances have brought the long-standing vision of automated workflow composition back into focus. This article summarizes a recent Lorentz Center workshop dedicated to automated composition of workflows in the life sciences. We survey previous initiatives to automate the composition process, and discuss the current state of the art and future perspectives. We start by drawing the "big picture" of the scientific workflow development life cycle, before surveying and discussing current methods, technologies, and practices for semantic domain modelling, automation in workflow development, and workflow assessment. Finally, we derive a roadmap of individual and community-based actions to work toward the vision of automated workflow development in the forthcoming years. A central outcome of the workshop is a general description of the workflow life cycle in six stages: 1) scientific question or hypothesis, 2) conceptual workflow, 3) abstract workflow, 4) concrete workflow, 5) production workflow, and 6) scientific results. The transitions between stages are facilitated by diverse tools and methods, usually incorporating domain knowledge in some form. Formal semantic domain modelling is hard and often a bottleneck for the application of semantic technologies.
However, life science communities have made considerable progress here in recent years and are continuously improving, renewing interest in the application of semantic technologies for workflow exploration, composition and instantiation. Combined with systematic benchmarking with reference data and large-scale deployment of production-stage workflows, such technologies enable a more systematic process of workflow development than we know today. We believe that this can lead to more robust, reusable, and sustainable workflows in the future.
We provide a sound and relatively complete Hoare logic for reasoning about partial correctness of recursive procedures in the presence of local variables and the call-by-value parameter mechanism, and in which the correctness proofs support contracts and are linear in the length of the program. We argue that, in spite of the fact that Hoare logics for recursive procedures have been intensively studied, no such logic has been proposed in the literature.
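As an illustration of the kind of judgement such a logic derives (not an example taken from the paper), a contract for a recursive factorial procedure with call-by-value parameter $u$ can be written as a Hoare triple, and the classical recursion rule lets the contract be assumed for recursive calls while verifying the procedure body $S$:

```latex
% Illustrative contract: recursive procedure fact with
% call-by-value parameter u and result variable r.
\{\, u = n \land n \ge 0 \,\}\ \mathit{fact}(u)\ \{\, r = n! \,\}

% Classical recursion rule: the contract may be assumed for the
% recursive calls while verifying the body S of procedure p.
\frac{\{P\}\ \mathbf{call}\ p\ \{Q\} \;\vdash\; \{P\}\ S\ \{Q\}}
     {\{P\}\ \mathbf{call}\ p\ \{Q\}}
```

Handling local variables and call-by-value correctly in such rules is exactly the subtlety the paper addresses.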
Neonatal sepsis is a major cause of death and disability in newborns. Commonly used biomarkers for diagnosis and evaluation of treatment response lack sufficient sensitivity or specificity. Additionally, new targets to treat the dysregulated immune response are needed, as are methods to effectively screen drugs for these targets. Available research methods have hitherto not yielded the breakthroughs required to significantly improve disease outcomes; we therefore describe the potential of zebrafish (Danio rerio) larvae as a preclinical model for neonatal sepsis. In biomedical research, zebrafish larvae combine the complexity of a whole organism with the convenience and high-throughput potential of in vitro methods. This paper illustrates that zebrafish exhibit an immune system that is remarkably similar to that of humans, both in terms of the types of immune cells and the signaling pathways. Moreover, the developmental state of the larval immune system is highly similar to that of human neonates. We provide examples of zebrafish larvae being used to study infections with pathogens commonly causing neonatal sepsis and discuss known limitations. We believe this species could expedite research into immune regulation during neonatal sepsis and may hold keys for the discovery of new biomarkers and novel treatment targets, as well as for screening of targeted drug therapies.
Information cascade size prediction is one of the primary challenges in understanding the diffusion of information. Traditional feature-based methods rely heavily on the quality of handcrafted features, requiring extensive domain knowledge and generalizing poorly to new domains. Recently, inspired by the success of deep learning in computer vision and natural language processing, researchers have developed neural network-based approaches to tackle this problem. However, existing deep learning-based methods either focus on modeling the temporal characteristics of cascades while ignoring the structural information, or fail to take the order-scale and position-scale into consideration when modeling the structure of information propagation. This paper proposes a novel graph neural network-based model, called MUCas, to learn latent representations of cascade graphs from a multi-scale perspective, making full use of the direction-scale, high-order-scale, position-scale, and dynamic-scale of cascades via a newly designed MUlti-scale Graph Capsule Network (MUG-Caps) and an influence-attention mechanism. Extensive experiments conducted on two real-world data sets demonstrate that MUCas significantly outperforms state-of-the-art approaches.
Zwaard, S. van der; Koppens, T.F.P.; Weide, G.; Levels, K.; Hofmijster, M.J.; Koning, J.J. de; Jaspers, R.T. 2021
Instant analysis of cybersecurity reports is a fundamental challenge for security experts, as an immense amount of cyber information is generated on a daily basis, which necessitates automated information extraction tools to facilitate querying and retrieval of data. Hence, we present Open-CyKG: an Open Cyber Threat Intelligence (CTI) Knowledge Graph (KG) framework that is constructed using an attention-based neural Open Information Extraction (OIE) model to extract valuable cyber threat information from unstructured Advanced Persistent Threat (APT) reports. More specifically, we first identify relevant entities by developing a neural cybersecurity Named Entity Recognizer (NER) that aids in labeling relation triples generated by the OIE model. Afterwards, the extracted structured data is canonicalized to build the KG by employing fusion techniques using word embeddings. As a result, security professionals can execute queries to retrieve valuable information from the Open-CyKG framework. Experimental results demonstrate that the proposed components that build up Open-CyKG outperform state-of-the-art models.
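The canonicalization step described above can be illustrated with a minimal sketch: entity mentions whose embedding vectors are sufficiently similar are merged under one canonical form. The greedy strategy, the similarity threshold, and the toy character-bigram "embedding" are all illustrative assumptions, not the paper's actual fusion technique.

```python
import numpy as np

def canonicalize(entities, embed, threshold=0.7):
    """Greedy canonicalization sketch: map each mention to the first
    earlier mention whose embedding cosine similarity exceeds the
    threshold, otherwise make it its own canonical form."""
    canon = {}   # mention -> canonical form
    reps = []    # (canonical mention, unit vector)
    for m in entities:
        v = embed(m)
        v = v / np.linalg.norm(v)
        for rep, rv in reps:
            if float(v @ rv) >= threshold:
                canon[m] = rep
                break
        else:
            reps.append((m, v))
            canon[m] = m
    return canon

def toy_embed(text):
    """Toy hashed character-bigram vector, so that similar spellings
    land close together (purely for demonstration)."""
    v = np.zeros(64)
    t = text.lower()
    for a, b in zip(t, t[1:]):
        v[(ord(a) * 31 + ord(b)) % 64] += 1.0
    return v

mentions = ["Microsoft Corp", "Microsoft Corporation", "APT28"]
print(canonicalize(mentions, toy_embed))
```

In the real framework the vectors would come from trained word embeddings; the sketch only shows how embedding similarity drives the merge decision.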
Schreuter, D.; Putten, P.W.H. van der; Lamers, M.H. 2021
Conversational artificial agents and artificially intelligent (AI) voice assistants are becoming increasingly popular. Digital virtual assistants such as Siri, or conversational devices such as Amazon Echo or Google Home, are permeating everyday life, and are designed to be more and more humanlike in their speech. This study investigates the effect this can have on one's conformity with an AI assistant. In the 1950s, Solomon Asch already demonstrated the power and danger of conformity amongst people. In these classical experiments, test persons were asked to answer relatively simple questions, whilst others, pretending to be participants, tried to convince the test person to give wrong answers. These studies were later replicated with embodied robots, but such physical robots are still rare. In light of our increasing reliance on AI assistants, this study investigates to what extent an individual will conform to a disembodied virtual assistant. We also investigate whether there is a difference between a group that interacts with an assistant that communicates through text, one with a robotic voice, and one with a humanlike voice. The assistant attempts to subtly influence participants' final responses in a general knowledge quiz, and we measure how often participants change their answer after having been given advice. Results show that participants conformed significantly more often to the assistant with a human voice than to the one that communicated through text.
Winderickx, J.; Braeken, A.; Singelee, D.; Mentens, N. 2021
Small- and medium-sized enterprises (SMEs) frequently experience cyberattacks, but often do not have the means to counter these attacks. Therefore, cybersecurity researchers and practitioners need to aid SMEs in their defence against cyber threats. Research has shown that SMEs require solutions that are automated and adapted to their context. In recent years, we have seen a surge in initiatives to share cyber threat intelligence (CTI) to improve collective cybersecurity resilience. Shared CTI has the potential to answer the SME call for automated and adaptable solutions. Sadly, as we demonstrate in this paper, current shared intelligence approaches scarcely address SME needs. We must investigate how shared CTI can be used to improve SME cybersecurity resilience. In this paper, we tackle this challenge through a systematic review of current state-of-the-art approaches to using shared CTI. We find that threat intelligence sharing platforms such as MISP have the potential to address SME needs, provided that the shared intelligence is turned into actionable insights. Based on this observation, we developed a prototype application that processes MISP data automatically, prioritises cybersecurity threats for SMEs, and provides SMEs with actionable recommendations tailored to their context. Subsequent evaluations in operational environments will help to improve our application, such that SMEs are enabled to thwart cyberattacks in the future.
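A prioritisation step of the kind the prototype performs might look like the following sketch. The event fields loosely mirror MISP-style metadata (tags, threat level), but the scoring scheme of tag overlap weighted by threat level, and all names, are purely hypothetical, not the application's actual logic.

```python
# Sketch: rank shared threat events by relevance to an SME's profile.
def prioritise(events, sme_profile):
    """Sort events so the most relevant, most severe come first.
    Hypothetical scoring: sector-tag overlap times a severity weight."""
    def score(event):
        relevance = len(set(event["tags"]) & set(sme_profile["sector_tags"]))
        severity = {"high": 3, "medium": 2, "low": 1}[event["threat_level"]]
        return relevance * severity
    return sorted(events, key=score, reverse=True)

events = [
    {"id": 1, "tags": ["retail", "phishing"],   "threat_level": "high"},
    {"id": 2, "tags": ["energy", "ics"],        "threat_level": "high"},
    {"id": 3, "tags": ["retail", "ransomware"], "threat_level": "medium"},
]
profile = {"sector_tags": ["retail", "phishing", "ransomware"]}
ranked = prioritise(events, profile)
print([e["id"] for e in ranked])  # [1, 3, 2]
```

The point of such a step is precisely the paper's observation: raw shared intelligence only helps SMEs once it is filtered and ranked for their context.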
Community detection is a well-established method for studying the meso-scale structure of social networks. Applying a community detection algorithm results in a division of a network into communities that is often used to inspect and reason about community membership of specific nodes. This micro-level interpretation step of community structure is a crucial step in typical social science research. However, the methodological caveat in this step is that virtually all modern community detection methods are non-deterministic and based on randomization and approximated results. This needs to be explicitly taken into consideration when reasoning about community membership of individual nodes. To do so, we propose a metric of community membership consistency, which provides node-level insight into how reliable the placement of a node into a community really is. In addition, it enables us to distinguish the core members of a community. The usefulness of the proposed metric is demonstrated on corporate board interlock networks, in which weighted links represent shared senior-level directors between firms. Results suggest that the community structure of global business groups is centered around persistent communities consisting of core countries tied by geographical and cultural proximity. In addition, we identify fringe countries that appear to associate with a number of different global business communities.
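One plausible formalisation of such a consistency metric (not necessarily the authors' exact definition) treats a node as consistent when its pairwise co-assignments are stable across repeated non-deterministic runs:

```python
from itertools import combinations

def membership_consistency(partitions):
    """Node-level consistency across repeated community detection runs.
    partitions: list of dicts mapping node -> community label (labels
    need not match across runs). For each node pair we compute the
    fraction p of runs in which they are co-assigned; a node's score is
    the mean over partners of max(p, 1 - p), so 1.0 means every pairwise
    relation is perfectly stable. Illustrative formalisation only."""
    nodes = sorted(partitions[0])
    n_runs = len(partitions)
    co = {pair: 0 for pair in combinations(nodes, 2)}
    for part in partitions:
        for i, j in co:
            if part[i] == part[j]:
                co[(i, j)] += 1
    score = {v: [] for v in nodes}
    for (i, j), c in co.items():
        stability = max(c / n_runs, 1 - c / n_runs)
        score[i].append(stability)
        score[j].append(stability)
    return {v: sum(s) / len(s) for v, s in score.items()}

runs = [
    {"a": 0, "b": 0, "c": 1, "d": 1},
    {"a": 0, "b": 0, "c": 1, "d": 0},  # d flips between communities
    {"a": 5, "b": 5, "c": 6, "d": 6},  # labels differ, structure agrees
]
print(membership_consistency(runs))
```

Note that the metric is label-invariant by construction: only co-assignment matters, so relabelled but structurally identical runs contribute full stability.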
In polypharmacology, drugs are required to bind to multiple specific targets, for example to enhance efficacy or to reduce resistance formation. Although deep learning has achieved a breakthrough in de novo design in drug discovery, most of its applications only focus on a single drug target to generate drug-like active molecules. However, in reality drug molecules often interact with more than one target, which can have desired (polypharmacology) or undesired (toxicity) effects. In a previous study we proposed a new method named DrugEx that integrates an exploration strategy into RNN-based reinforcement learning to improve the diversity of the generated molecules. Here, we extended our DrugEx algorithm with multi-objective optimization to generate drug-like molecules towards multiple targets, or towards one specific target while avoiding off-targets (in this study, the two adenosine receptors, A1AR and A2AAR, and the potassium ion channel hERG). In our model, we applied an RNN as the agent and machine learning predictors as the environment. Both the agent and the environment were pre-trained in advance and then interplayed under a reinforcement learning framework. The concept of evolutionary algorithms was merged into our method such that crossover and mutation operations were implemented by the same deep learning model as the agent. During the training loop, the agent generates a batch of SMILES-based molecules. Subsequently, scores for all objectives, provided by the environment, are used to construct Pareto ranks of the generated molecules. For this ranking, a non-dominated sorting algorithm and a Tanimoto-based crowding distance algorithm using chemical fingerprints are applied. Here, we adopted GPU acceleration to speed up the process of Pareto optimization.
The final reward of each molecule is calculated based on the Pareto ranking with the ranking selection algorithm. The agent is trained under the guidance of this reward to make sure it can generate the desired molecules after convergence of the training process. All in all, we demonstrate the generation of compounds with a diverse predicted selectivity profile towards multiple targets, offering the potential of high efficacy and low toxicity.
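The non-dominated sorting step that underlies the Pareto ranking can be sketched as follows. This is a generic textbook version (assuming maximisation of all objectives), not the DrugEx implementation:

```python
def non_dominated_sort(scores):
    """Partition candidates into Pareto fronts by their objective
    score tuples (maximisation). Front 0 contains candidates dominated
    by no one; front 1 those dominated only by front 0; and so on."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    remaining = list(range(len(scores)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(scores[j], scores[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical objectives, e.g. (affinity A1AR, affinity A2AAR, 1 - hERG liability)
scores = [(0.9, 0.8, 0.7), (0.5, 0.9, 0.6), (0.4, 0.4, 0.4), (0.9, 0.9, 0.9)]
print(non_dominated_sort(scores))  # [[3], [0, 1], [2]]
```

Within each front, a crowding distance (Tanimoto-based on fingerprints in the paper) then breaks ties to preserve chemical diversity.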
Quantum algorithms for solving the Quantum Linear System (QLS) problem are among the most investigated quantum algorithms of recent times, with potential applications including the solution of computationally intractable differential equations and speed-ups in machine learning. A fundamental parameter governing the efficiency of QLS solvers is kappa, the condition number of the coefficient matrix A, as it has been known since the inception of the QLS problem that for worst-case instances the runtime scales at least linearly in kappa [1]. However, for the case of positive-definite matrices, classical algorithms can solve linear systems with a runtime scaling as sqrt(kappa), a quadratic improvement compared to the indefinite case. It is then natural to ask whether QLS solvers may admit an analogous improvement. In this work we answer the question in the negative, showing that solving a QLS entails a runtime linear in kappa also when A is positive definite. We then identify broad classes of positive-definite QLS where this lower bound can be circumvented and present two new quantum algorithms featuring a quadratic speed-up in kappa: the first is based on efficiently implementing a matrix-block-encoding of A^-1; the second constructs a decomposition of the form A = LL† to precondition the system. These methods are widely applicable, and both allow one to efficiently solve BQP-complete problems.
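The classical sqrt(kappa) scaling for positive-definite systems comes from the conjugate gradient method, whose standard convergence bound in the A-norm is:

```latex
% Standard conjugate gradient convergence bound (A positive definite,
% \kappa its condition number):
\|x_k - x_\ast\|_A \;\le\;
  2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{k}
  \|x_0 - x_\ast\|_A
% hence reaching relative error \epsilon takes
% k = O\!\left(\sqrt{\kappa}\,\log(1/\epsilon)\right) iterations.
```

The paper's question is whether quantum solvers can match this square-root dependence in the positive-definite case; the answer above is no in general, but yes for the identified subclasses.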
Yalouz, S.; Senjean, B.; Miatto, F.; Dunjko, V. 2021
Variational quantum algorithms (VQAs) are considered among the most promising methods to determine the properties of complex strongly correlated quantum many-body systems, especially from the perspective of devices available in the near term. In this context, the development of efficient quantum circuit ansätze to encode a many-body wavefunction is one of the keys to the success of a VQA. Great efforts have been invested in studying the potential of current quantum devices to encode the eigenstates of fermionic systems, but little is known about the encoding of bosonic systems. In this work, we investigate the encoding of the ground state of the (simple but rich) attractive Bose-Hubbard model using a continuous-variable (CV) photonic-based quantum circuit. We introduce two different ansatz architectures and demonstrate that the proposed continuous-variable quantum circuits can accurately encode (with a fidelity higher than 99%) the strongly correlated many-boson wavefunction with just a few layers, in all many-body regimes and for different numbers of bosons and initial states. Beyond studying the suitability of the ansatz to approximate the ground states of many-boson systems, we also perform initial evaluations of its use in a variational quantum eigensolver algorithm to find the ground state through energy minimization. To this end, we also introduce a scheme to measure the Hamiltonian energy in an experimental system, and study the effect of sampling noise.
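For reference, the Bose-Hubbard Hamiltonian whose ground state is being encoded reads, in its standard form (the attractive regime corresponds to U < 0):

```latex
% Bose-Hubbard model: hopping amplitude t on nearest-neighbour pairs
% <i,j>, on-site interaction U, number operator n_i = a_i^\dagger a_i.
\hat{H} \;=\; -t \sum_{\langle i,j\rangle}
  \left(\hat{a}_i^{\dagger}\hat{a}_j + \mathrm{h.c.}\right)
\;+\; \frac{U}{2}\sum_i \hat{n}_i\left(\hat{n}_i - 1\right)
```

The CV photonic encoding is natural here because photonic modes, like bosonic lattice sites, carry an unbounded occupation number.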
Yang, X.F.; Gao, S.H.; Guo, L.; Wang, B.; Jia, Y.Y.; Zhou, J.; ... ; Ye, K. 2021
Papaver species P. setigerum, P. rhoeas, and P. somniferum accumulate different levels of morphine and noscapine. Here, the authors report an improved genome assembly of P. somniferum and de novo assemblies of the other two species, and reveal the evolution of the benzylisoquinoline alkaloid biosynthetic pathway. Over millions of years, plants have evolved a wealth of structurally diverse secondary metabolites (SMs) to support their sessile lifestyles through continuous biochemical pathway innovation. While new genes commonly drive the evolution of plant SM pathways, how a full biosynthetic pathway evolves remains poorly understood. Pathway evolution involves recruiting new genes along the reaction cascade forwardly, backwardly, or in a patchwork manner. With three chromosome-scale Papaver genome assemblies, we here reveal that whole-genome duplications (WGDs) apparently accelerate chromosomal rearrangements, with a nonrandom distribution towards SM optimization. A burst of structural variants involving fusions, translocations, and duplications within 7.7 million years assembled nine genes into the benzylisoquinoline alkaloid gene cluster, following a punctuated patchwork model. Biosynthetic gene copies and their total expression matter for morphinan production. Our results demonstrate how new genes have been recruited from a WGD-induced repertoire of unregulated enzymes with promiscuous reactivities to create efficient metabolic pathways under spatiotemporal constraints.
This paper investigates how often popular configurations of Differential Evolution generate solutions outside the feasible domain. Following previous publications in the field, we argue that what the algorithm does with such solutions, and how often this has to happen, is important for the overall performance of the algorithm and the interpretation of results. Significantly more solutions than practitioners usually assume have to undergo some sort of 'correction' to conform with the definition of the problem's search domain. A wide range of popular Differential Evolution configurations is considered in this study. Conclusions are made regarding the effect that the Differential Evolution components and parameter settings have on the distribution of percentages of infeasible solutions generated in a series of independent runs. Results in this study suggest strong dependencies between the percentage of generated infeasible solutions and every aspect mentioned above. Further investigation of the distribution of percentages of generated infeasible solutions is required.
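The counting experiment described above can be reproduced in miniature: a bare-bones DE/rand/1/bin that records how many trial vectors violate the box constraints before any correction is applied. Parameter values here are common defaults and the clipping ('saturation') correction is just one of the possible strategies, not a recommendation from the paper.

```python
import numpy as np

def de_infeasible_fraction(f, bounds, pop_size=20, dims=5, gens=50,
                           F=0.5, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin (minimisation) that counts the fraction of
    generated trial vectors falling outside the box constraints.
    Infeasible trials are clipped only to keep the run going; the
    quantity of interest is the counter."""
    rng = np.random.default_rng(seed)
    lb, ub = bounds
    pop = rng.uniform(lb, ub, (pop_size, dims))
    fit = np.array([f(x) for x in pop])
    infeasible = total = 0
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # rand/1 mutation
            cross = rng.random(dims) < CR                # binomial crossover
            cross[rng.integers(dims)] = True             # force >= 1 component
            trial = np.where(cross, mutant, pop[i])
            total += 1
            if np.any(trial < lb) or np.any(trial > ub):
                infeasible += 1
                trial = np.clip(trial, lb, ub)           # saturation correction
            tf = f(trial)
            if tf <= fit[i]:
                pop[i], fit[i] = trial, tf
    return infeasible / total

sphere = lambda x: float(np.sum(x ** 2))
print(de_infeasible_fraction(sphere, (-5.0, 5.0)))
```

Varying F, CR, the mutation strategy, or the correction method in such a harness is exactly the kind of experiment whose outcomes the paper analyses at scale.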