The work described in this thesis had two objectives, specifically focusing on people aged 70 years and older: first, we aimed to investigate the associations between several thrombosis-related risk factors described in young and middle-aged populations and the risk of venous thrombosis (VT) in the elderly; second, we aimed to provide insight into several long-term consequences (i.e., health-related quality of life (HRQoL) and long-term risk of mortality) after a first VT at old age.
BACKGROUND: Linezolid in combination with rifampicin has been used in the treatment of infective endocarditis, especially for patients infected with staphylococci.
OBJECTIVES: Because rifampicin has been reported to reduce the plasma concentration of linezolid, the present study aimed to characterize the population pharmacokinetics of linezolid in order to quantify the effect of rifampicin cotreatment. In addition, the possibility of compensation by dosage adjustments was evaluated.
PATIENTS AND METHODS: Pharmacokinetic measurements were performed in 62 patients treated with linezolid for left-sided infective endocarditis in the Partial Oral Endocarditis Treatment (POET) trial. Fifteen patients were cotreated with rifampicin. A total of 437 linezolid plasma concentrations were obtained. The pharmacokinetic data were adequately described by a one-compartment model with first-order absorption and first-order elimination.
RESULTS: We demonstrated a substantial increase in linezolid clearance of 150% (95% CI: 78%-251%) when combined with rifampicin. The final model was evaluated by goodness-of-fit plots showing an acceptable fit, and a visual predictive check validated the model. Model-based dosing simulations showed that rifampicin cotreatment decreased the probability of target attainment (PTA) of linezolid from 94.3% to 34.9% and from 52.7% to 3.5% for MICs of 2 mg/L and 4 mg/L, respectively.
CONCLUSIONS: A substantial interaction between linezolid and rifampicin was detected in patients with infective endocarditis, and the interaction was stronger than previously reported. Model-based simulations showed that increasing the linezolid dose might compensate for the interaction without increasing the risk of adverse effects to the same degree.
Bock, M.; Theut, A.M.; Hasselt, J.G.C. van; Wang, H.; Fuursted, K.; Høiby, N.; ... ; Moser, C. 2023
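A one-compartment model with first-order absorption and first-order elimination, as used in the study above, has a closed-form concentration profile, so the qualitative effect of a 150% clearance increase is easy to demonstrate. The sketch below uses hypothetical parameter values, not the fitted population estimates from the POET data:

```python
import math

def conc_one_compartment(t, dose, ka, cl, v, f=1.0):
    """Plasma concentration at time t (h) after a single oral dose:
    one-compartment model, first-order absorption (ka) and first-order
    elimination (ke = CL/V). Standard Bateman equation (assumes ka != ke)."""
    ke = cl / v
    return (f * dose * ka) / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Hypothetical linezolid-like parameters (illustrative only).
base_cl = 7.0            # L/h without rifampicin
rif_cl = base_cl * 2.5   # a ~150% clearance increase, as reported with rifampicin

c_base = conc_one_compartment(t=4, dose=600, ka=1.5, cl=base_cl, v=45)
c_rif = conc_one_compartment(t=4, dose=600, ka=1.5, cl=rif_cl, v=45)
# Higher clearance -> lower exposure at the same dose.
assert c_rif < c_base
```

At the same dose, the cotreated profile is uniformly lower, which is why the simulated PTAs drop so sharply unless the dose is increased.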
BACKGROUND: In the POET (Partial Oral Endocarditis Treatment) trial, oral step-down therapy was noninferior to full-length intravenous antibiotic administration. The aim of the present study was to perform pharmacokinetic/pharmacodynamic analyses for oral treatments of infective endocarditis to assess the probabilities of target attainment (PTAs).
METHODS: Plasma concentrations of oral antibiotics were measured at days 1 and 5. Minimal inhibitory concentrations (MICs) were determined for the bacteria causing infective endocarditis (streptococci, staphylococci, or enterococci). Pharmacokinetic/pharmacodynamic targets were predefined according to the literature, using time above MIC or the ratio of area under the curve to MIC. Population pharmacokinetic modeling and pharmacokinetic/pharmacodynamic analyses were performed for amoxicillin, dicloxacillin, linezolid, moxifloxacin, and rifampicin, and PTAs were calculated.
RESULTS: A total of 236 patients participated in this POET substudy. For amoxicillin and linezolid, the PTAs were 88%-100%. For moxifloxacin and rifampicin, the PTAs were 71%-100%. Using a clinical breakpoint for staphylococci, the PTAs for dicloxacillin were 9%-17%. Seventy-four patients at day 1 and 65 patients at day 5 had available pharmacokinetic and MIC data for two oral antibiotics. Of those, 13 patients at day 1 and 14 patients at day 5 reached the target for only one antibiotic. One patient did not reach the target for either of the two antibiotics.
CONCLUSION: For the individual orally administered antibiotics, the majority of patients reached the target level. Sub-target levels in individual patients were compensated for by the administration of two different antibiotics. The findings support the efficacy of oral step-down antibiotic treatment in patients with infective endocarditis.
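A PTA of the kind reported above is typically computed by Monte Carlo simulation: draw virtual patients from the estimated between-subject variability, then count the fraction attaining the predefined PK/PD target. A minimal sketch with an AUC/MIC target; all parameter values (clearance, variability, target ratio) are invented for illustration:

```python
import math
import random

def pta_auc_over_mic(n_sim, dose, cl_mean, cl_cv, mic, target_ratio, seed=0):
    """Probability of target attainment for an AUC/MIC target: simulate
    between-patient clearance variability (log-normal, multiplicative),
    compute AUC = dose / CL per patient, and count target hits."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        cl = cl_mean * math.exp(rng.gauss(0, cl_cv))  # one simulated patient
        auc = dose / cl  # steady-state AUC over the dosing interval
        if auc / mic >= target_ratio:
            hits += 1
    return hits / n_sim

# Illustrative numbers only: doubling the MIC lowers the PTA.
pta_mic2 = pta_auc_over_mic(10_000, dose=600, cl_mean=7.0, cl_cv=0.3,
                            mic=2, target_ratio=50)
pta_mic4 = pta_auc_over_mic(10_000, dose=600, cl_mean=7.0, cl_cv=0.3,
                            mic=4, target_ratio=50)
assert pta_mic4 <= pta_mic2
```

The same mechanism explains the spread of PTAs across drugs: the further a drug's exposure distribution sits above MIC * target, the closer the PTA is to 100%.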
OBJECTIVE: Distinguishing normal, neuropathic, and myopathic electromyography (EMG) traces can be challenging. We aimed to create an automated time series classification algorithm.
METHODS: EMGs of healthy controls (HC, n = 25) and patients with amyotrophic lateral sclerosis (ALS, n = 20) and inclusion body myositis (IBM, n = 20) were retrospectively selected based on longitudinal clinical follow-up data (ALS and HC) or muscle biopsy (IBM). A machine learning pipeline was applied based on 5-second EMG fragments of each muscle. Diagnostic yield, expressed as the area under the curve (AUC) of a receiver operating characteristic curve, accuracy, sensitivity, and specificity, was determined per muscle (muscle-level) and per patient (patient-level).
RESULTS: The diagnostic yield of the classification ALS vs. HC was an AUC of 0.834 ± 0.014 at muscle-level and 0.856 ± 0.009 at patient-level. For the classification HC vs. IBM, the AUC was 0.744 ± 0.043 at muscle-level and 0.735 ± 0.029 at patient-level. For the classification ALS vs. IBM, the AUC was 0.569 ± 0.024 at muscle-level and 0.689 ± 0.035 at patient-level.
CONCLUSIONS: An automated time series classification algorithm can distinguish EMGs of healthy individuals from those of patients with ALS with a high diagnostic yield. Using longer EMG fragments with different levels of muscle activation may improve performance.
SIGNIFICANCE: In the future, machine learning algorithms may help improve the diagnostic accuracy of EMG examinations.
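AUC values like those reported above can be computed directly via the rank (Mann-Whitney) formulation of the ROC AUC, without constructing the curve. The classifier scores below are invented, not the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive (e.g. ALS)
    fragment scores higher than a randomly chosen negative (HC) one:
    Mann-Whitney U divided by n_pos * n_neg, with ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy classifier scores for EMG fragments (hypothetical values).
als_scores = [0.9, 0.8, 0.7, 0.4]
hc_scores = [0.5, 0.3, 0.2, 0.1]
print(roc_auc(als_scores, hc_scores))  # 0.9375
```

Averaging such per-fragment scores within a muscle or a patient before computing the AUC is one way the muscle-level and patient-level yields can differ.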
All tissue development and replenishment relies upon the breaking of symmetries, leading to the morphological and operational differentiation of progenitor cells into more specialized cells. One of the main engines driving this process is the Notch signal transduction pathway, a ubiquitous signalling system found in the vast majority of metazoan cell types characterized to date. Broadly speaking, Notch receptor activity is governed by a balance between two processes: (1) intercellular Notch transactivation, triggered via interactions between receptors and ligands expressed in neighbouring cells; (2) intracellular cis-inhibition, caused by ligands binding to receptors within the same cell. Additionally, recent reports have also unveiled evidence of cis activation. Whilst context-dependent Notch receptor clustering has been hypothesized, to date Notch signalling has been assumed to involve an interplay between receptor and ligand monomers. In this study, we demonstrate biochemically, through a mutational analysis of DLL4, both in vitro and in tissue culture cells, that Notch ligands can efficiently self-associate. We found that the membrane-proximal EGF-like repeat of DLL4 was necessary and sufficient to promote oligomerization/dimerization. Mechanistically, our experimental evidence supports the view that DLL4 ligand dimerization is specifically required for cis-inhibition of Notch receptor activity. To further substantiate these findings, we have adapted and extended existing ordinary differential equation-based models of Notch signalling to take account of the ligand dimerization-dependent cis-inhibition reported here. Our new model faithfully recapitulates our experimental data and improves predictions based upon published data. Collectively, our work favours a model in which net output following Notch receptor/ligand binding results from ligand monomer-driven Notch receptor transactivation (and cis activation) counterposed by ligand dimer-mediated cis-inhibition.
Author summary: The growth and maintenance of tissues is a fundamental characteristic of metazoan life, controlled by a highly conserved core of cell signal transduction networks. One such pathway, the Notch signalling system, plays a unique role in these phenomena by orchestrating the generation of the phenotypic and genetic asymmetries which underlie tissue development and remodeling. At the molecular level, it achieves this via two specific types of receptor/ligand interaction: intercellular binding of receptors and ligands expressed in neighbouring cells, which triggers receptor activation (transactivation); and intracellular receptor/ligand binding within the same cell, which blocks receptor activation (cis-inhibition). Together, these counterposed mechanisms determine the strength, the direction, and the specificity of Notch signalling output. Whilst the basic mechanisms of receptor transactivation have been delineated in some detail, the precise nature of cis-inhibition has remained enigmatic. Through a combination of experimental approaches and computational modelling, in this study we present a new model of Notch signalling in which ligand monomers promote Notch receptor transactivation, whereas cis-inhibition is induced optimally via ligand dimers. This is the first model to include a concrete molecular distinction, in terms of ligand configuration, between the main branches of Notch signalling. Our model faithfully recapitulates both our presented experimental results and the recently published work of others, and provides a novel perspective for understanding Notch-regulated biological processes such as embryo development and angiogenesis.
Competing Interest Statement: The authors have declared no competing interest.
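The core qualitative claim of the model described above, that monomers drive transactivation while dimers drive cis-inhibition, can be illustrated with a deliberately simplified steady-state toy. This is not the authors' ODE model; the monomer/dimer partition and all rate constants below are hypothetical:

```python
def notch_signal(l_total, k_dim, k_trans, k_cis, l_neighbor):
    """Toy steady-state sketch: ligand monomers on the neighbouring cell
    trans-activate the receptor, while ligand dimers in the same cell
    cis-inhibit it. Crude saturating partition of cis ligand into monomer
    and dimer pools; all constants are hypothetical, for illustration only."""
    monomer = l_total / (1 + k_dim * l_total)  # cis monomer pool
    dimer = l_total - monomer                  # cis dimer pool
    activation = k_trans * l_neighbor          # monomer-driven transactivation
    inhibition = 1.0 / (1.0 + k_cis * dimer)   # dimer-mediated cis-inhibition
    return activation * inhibition

# More ligand expressed in cis -> larger dimer pool -> stronger
# cis-inhibition of the same trans signal, qualitatively matching the model.
low_cis = notch_signal(l_total=1.0, k_dim=0.5, k_trans=1.0, k_cis=2.0, l_neighbor=1.0)
high_cis = notch_signal(l_total=10.0, k_dim=0.5, k_trans=1.0, k_cis=2.0, l_neighbor=1.0)
assert high_cis < low_cis
```

A full treatment would integrate the receptor, monomer, and dimer species as coupled ODEs, as the paper does; the toy only captures the sign of the effect.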
Thirty years, 1993-2023, is a huge time frame in science. We address some major developments in the field of evolutionary algorithms, with applications in parameter optimization, over these 30 years. These include the covariance matrix adaptation evolution strategy and some fast-growing fields such as multimodal optimization, surrogate-assisted optimization, multiobjective optimization, and automated algorithm design. Moreover, we also discuss particle swarm optimization and differential evolution, which did not exist 30 years ago either. One of the key arguments made in the paper is that we need fewer algorithms, not more; the current trend, however, is the continual appropriation of paradigms from nature that are suggested to be useful as new optimization algorithms. Moreover, we argue that we need proper benchmarking procedures to sort out whether a newly proposed algorithm is useful or not. We also briefly discuss automated algorithm design approaches, including configurable algorithm design frameworks, as the proposed next step toward designing optimization algorithms automatically, rather than by hand.
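Of the algorithms surveyed above, differential evolution is compact enough to state in full. A minimal DE/rand/1/bin sketch for box-constrained minimization; the control parameters are conventional defaults, not recommendations from the paper:

```python
import random

def differential_evolution(f, bounds, pop_size=20, f_weight=0.8, cr=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin: rand/1 mutation, binomial crossover,
    greedy one-to-one survivor selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donor indices, all different from i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + f_weight * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clamp to the box
                else:
                    trial.append(pop[i][j])
            f_trial = f(trial)
            if f_trial <= fit[i]:  # keep the trial only if it is no worse
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, bounds=[(-5.0, 5.0)] * 3)
# On this easy separable problem, DE converges close to the optimum at 0.
```

The paper's benchmarking argument applies exactly here: on the sphere function almost anything works, so meaningful comparisons require harder, standardized problem suites.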
Global patterns of regional (gamma) plant diversity are relatively well known, but whether these patterns hold for local communities, and the dependence on spatial grain, remain controversial. Using data on 170,272 georeferenced local plant assemblages, we created global maps of alpha diversity (local species richness) for vascular plants at three different spatial grains, for forests and non-forests. We show that alpha diversity is consistently high across grains in some regions (for example, Andean-Amazonian foothills), but regional ‘scaling anomalies’ (deviations from the positive correlation) exist elsewhere, particularly in Eurasian temperate forests with disproportionally higher fine-grained richness and many African tropical forests with disproportionally higher coarse-grained richness. The influence of different climatic, topographic and biogeographical variables on alpha diversity also varies across grains. Our multi-grain maps return a nuanced understanding of vascular plant biodiversity patterns that complements classic maps of biodiversity hotspots and will improve predictions of global change effects on biodiversity.
Real-world optimization scenarios under uncertainty and noise are typically handled with robust optimization techniques, which re-formulate the original optimization problem into a robust counterpart, e.g., by taking an average of the function values over different perturbations to a specific input. Solving the robust counterpart instead of the original problem can significantly increase the associated computational cost, which, to the best of our knowledge, is often overlooked in the literature. Such an extra cost brought by robust optimization might depend on the problem landscape, the dimensionality, the severity of the uncertainty, and the formulation of the robust counterpart. This paper takes an empirical approach that evaluates and compares the computational cost brought by different robustness formulations in Kriging-based optimization on a wide combination (300 test cases) of problems, uncertainty levels, and dimensions. We mainly focus on the CPU time taken to find robust solutions, and choose five commonly applied robustness formulations: "mini-max robustness", "mini-max regret robustness", "expectation-based robustness", "dispersion-based robustness", and "composite robustness". We assess the empirical performance of these robustness formulations in terms of a fixed-budget and a fixed-target analysis, from which we find that "mini-max robustness" is the most practical formulation with respect to the associated computational cost.
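Two of the robustness formulations compared above can be stated in a few lines, and the sketch also makes the extra cost visible: each robust evaluation multiplies the number of objective calls by the size of the perturbation set. The objective and perturbation set below are hypothetical, chosen only for illustration:

```python
def minimax_robust(f, x, deltas):
    """Mini-max robustness: the robust value of x is the worst (largest,
    for minimization) objective value over a set of input perturbations."""
    return max(f([xi + di for xi, di in zip(x, d)]) for d in deltas)

def expectation_robust(f, x, deltas):
    """Expectation-based robustness: the average objective value over the
    same perturbation set."""
    vals = [f([xi + di for xi, di in zip(x, d)]) for d in deltas]
    return sum(vals) / len(vals)

# Toy 1-D objective with minimum at x = 1, perturbed by +/- 0.5.
f = lambda x: (x[0] - 1.0) ** 2
deltas = [(-0.5,), (0.0,), (0.5,)]
print(minimax_robust(f, [1.0], deltas))      # 0.25 (worst case)
print(expectation_robust(f, [1.0], deltas))  # ≈ 0.1667 (average case)
```

Either way, evaluating one candidate costs len(deltas) objective calls instead of one, which is precisely the overhead the study quantifies across problems and dimensions.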
As combinatorial optimization is one of the main quantum computing applications, many methods based on parameterized quantum circuits are being developed. In general, a set of parameters is tweaked to optimize a cost function computed from the quantum circuit output. One of these algorithms, the Quantum Approximate Optimization Algorithm (QAOA), stands out as a promising approach to tackling combinatorial problems. However, finding the appropriate parameters is a difficult task. Although QAOA exhibits concentration properties, these can depend on instance characteristics that may not be easy to identify, but may nonetheless offer useful information for finding good parameters. In this work, we study unsupervised machine learning approaches for setting these parameters without optimization. We perform clustering on the angle values but also on instance encodings (using instance features or the output of a variational graph autoencoder), and compare different approaches. These angle-finding strategies can be used to reduce calls to quantum circuits when leveraging QAOA as a subroutine. We showcase them within Recursive-QAOA up to depth 3, where the number of QAOA parameters used per iteration is limited to 3, achieving a median approximation ratio of 0.94 for MaxCut over 200 Erdős-Rényi graphs. We obtain similar performance to the case where we extensively optimize the angles, hence saving numerous circuit calls.
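The clustering idea described above can be sketched with plain k-means: cluster previously optimized QAOA angle vectors, then let a new, similar instance adopt a cluster representative instead of running its own angle optimization. The angle values below are made up for illustration, and this is only one of the strategies the paper compares:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on small vectors; here used to cluster previously
    optimized QAOA angle vectors so a new instance can reuse a centroid
    as its angles without further optimization (illustrative sketch)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        centers = [
            [sum(v) / len(g) for v in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

# Hypothetical (gamma, beta) angle pairs from previously solved instances:
# two loose groups around (0.8, 0.4) and (2.08, 0.99).
angles = [(0.8, 0.4), (0.82, 0.41), (0.79, 0.39), (2.1, 1.0), (2.05, 0.98)]
centers = kmeans(angles, k=2)
# A new, similar instance would simply adopt the nearest centroid's angles.
```

Each reused centroid replaces a full variational optimization loop, which is where the savings in quantum circuit calls come from.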
Improving resource efficiency (RE) is an important objective of the Sustainable Development Goals. In this study we find a strong exponential relationship between the economic complexity index (ECI) and the RE of countries. ECI measures the level of accumulated knowledge of a society, enabling the products it makes. The relationship between ECI and RE is stronger for primary material importers and countries with stable institutions. Assessing a country's level of ECI also allows an outlook on future RE trends. We explain how ECI influences RE at the product level by establishing the product space for each country and by defining core products that contribute to a high product complexity index, high RE (i.e., unit price), and promising expansibility (i.e., core number), which indicates the potential to produce more advanced products in the future. Policies that improve economic complexity and invest in core products seem to be a priority to achieve sustainable development.
Liu, Y.; Linz, H.; Fang, M.; Henning, T.; Wolf, S.; Flock, M.; ... ; Li, D. 2022
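An exponential relationship of the kind reported above is commonly estimated by ordinary least squares on log-transformed values, since ln(RE) = ln(a) + b * ECI is linear in ECI. A sketch on synthetic data; the coefficients are invented, not the study's estimates:

```python
import math

def fit_exponential(x, y):
    """Fit y ≈ a * exp(b * x) by ordinary least squares on ln(y):
    regress ln(y) on x, then back-transform the intercept."""
    n = len(x)
    ly = [math.log(v) for v in y]
    mx, my = sum(x) / n, sum(ly) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ly))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic country data generated from a known exponential law.
eci = [-1.0, -0.5, 0.0, 0.5, 1.0]
re = [2.0 * math.exp(0.8 * x) for x in eci]
a, b = fit_exponential(eci, re)
# On noiseless data the fit recovers the generating parameters.
assert abs(a - 2.0) < 1e-9 and abs(b - 0.8) < 1e-9
```

With real, noisy country data the same regression yields estimates with standard errors rather than the exact generating values.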