BACKGROUND: Linezolid in combination with rifampicin has been used in the treatment of infective endocarditis, especially for patients infected with staphylococci. OBJECTIVES: Because rifampicin has been reported to reduce the plasma concentration of linezolid, the present study aimed to characterize the population pharmacokinetics of linezolid in order to quantify the effect of rifampicin cotreatment. In addition, the possibility of compensating by dosage adjustment was evaluated. PATIENTS AND METHODS: Pharmacokinetic measurements were performed in 62 patients treated with linezolid for left-sided infective endocarditis in the Partial Oral Endocarditis Treatment (POET) trial. Fifteen patients were cotreated with rifampicin. A total of 437 linezolid plasma concentrations were obtained. The pharmacokinetic data were adequately described by a one-compartment model with first-order absorption and first-order elimination. RESULTS: We demonstrated a substantial increase of linezolid clearance by 150% (95% CI: 78%-251%) when combined with rifampicin. The final model was evaluated by goodness-of-fit plots showing an acceptable fit, and a visual predictive check validated the model. Model-based dosing simulations showed that rifampicin cotreatment decreased the probability of target attainment (PTA) of linezolid from 94.3% to 34.9% and from 52.7% to 3.5% for MICs of 2 mg/L and 4 mg/L, respectively. CONCLUSIONS: A substantial interaction between linezolid and rifampicin was detected in patients with infective endocarditis, and the interaction was stronger than previously reported. Model-based simulations showed that increasing the linezolid dose might compensate for the interaction without increasing the risk of adverse effects to the same degree.
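As a rough illustration of the model structure described in this abstract, the sketch below simulates a single oral dose with a one-compartment, first-order absorption/elimination model and applies the reported 150% clearance increase as a rifampicin covariate effect. The dose and the baseline parameter values (CL, V, ka) are illustrative assumptions, not estimates from the study.

```python
# Minimal sketch (not the POET model): one-compartment model with first-order
# absorption and elimination; rifampicin cotreatment is modelled as a 150%
# increase in clearance, as reported in the abstract. Dose and baseline
# parameters below are illustrative assumptions.
import numpy as np

def concentration(t, dose_mg=600.0, ka=1.5, cl=5.0, v=40.0, rifampicin=False):
    """Plasma concentration (mg/L) at time t (h) after a single oral dose."""
    if rifampicin:
        cl = cl * 2.5                  # 150% increase in clearance
    ke = cl / v                        # elimination rate constant (1/h)
    # Analytical solution of the first-order absorption one-compartment model
    return dose_mg * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def auc(y, t):
    """Trapezoidal-rule area under the concentration-time curve."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2))

t = np.linspace(0, 12, 121)
print("AUC alone        :", round(auc(concentration(t), t), 1), "mg*h/L")
print("AUC with rifampicin:", round(auc(concentration(t, rifampicin=True), t), 1), "mg*h/L")
```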
Bock, M.; Theut, A.M.; Hasselt, J.G.C. van; Wang, H.; Fuursted, K.; Høiby, N.; ... ; Moser, C. 2023
BACKGROUND: In the POET (Partial Oral Endocarditis Treatment) trial, oral step-down therapy was noninferior to full-length intravenous antibiotic administration. The aim of the present study was to perform pharmacokinetic/pharmacodynamic analyses for oral treatments of infective endocarditis to assess the probabilities of target attainment (PTAs). METHODS: Plasma concentrations of oral antibiotics were measured at days 1 and 5. Minimal inhibitory concentrations (MICs) were determined for the bacteria causing infective endocarditis (streptococci, staphylococci, or enterococci). Pharmacokinetic/pharmacodynamic targets were predefined according to the literature using time above MIC or the ratio of the area under the curve to MIC. Population pharmacokinetic modeling and pharmacokinetic/pharmacodynamic analyses were done for amoxicillin, dicloxacillin, linezolid, moxifloxacin, and rifampicin, and PTAs were calculated. RESULTS: A total of 236 patients participated in this POET substudy. For amoxicillin and linezolid, the PTAs were 88%-100%. For moxifloxacin and rifampicin, the PTAs were 71%-100%. Using a clinical breakpoint for staphylococci, the PTAs for dicloxacillin were 9%-17%. Seventy-four patients at day 1 and 65 patients at day 5 had available pharmacokinetic and MIC data for two oral antibiotics. Of those, 13 patients at day 1 and 14 patients at day 5 reached the target for only one antibiotic. One patient did not reach the target for either of the two antibiotics. CONCLUSION: For the individual orally administered antibiotics, the majority of patients reached the target level. Patients with sub-target levels were compensated by the administration of two different antibiotics. The findings support the efficacy of oral step-down antibiotic treatment in patients with infective endocarditis.
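The PTA computation itself is not detailed in the abstract; the following is a minimal sketch of how a probability of target attainment can be estimated by Monte Carlo simulation for a time-above-MIC target. The dosing regimen, PK parameters, between-patient variability, and the 50% fT>MIC target are illustrative assumptions rather than values from this substudy.

```python
# Minimal PTA sketch: simulate steady-state concentration-time profiles for a
# virtual population and count the fraction reaching a 50% fT>MIC target.
# All numbers here are illustrative assumptions, not study estimates.
import numpy as np

rng = np.random.default_rng(1)
n_patients, dose, tau = 1000, 1000.0, 8.0            # mg every 8 h at steady state
cl = 10.0 * np.exp(rng.normal(0, 0.3, n_patients))   # clearance (L/h), log-normal IIV
v = 30.0 * np.exp(rng.normal(0, 0.2, n_patients))    # volume (L), log-normal IIV
ke = cl / v
t = np.linspace(0, tau, 481)

def pta(mic, target_fraction=0.5):
    # Steady-state one-compartment (bolus-input) concentration per patient
    conc = (dose / v[:, None]) * np.exp(-ke[:, None] * t) / (1 - np.exp(-ke[:, None] * tau))
    ft_above = (conc > mic).mean(axis=1)              # fraction of dosing interval above MIC
    return (ft_above >= target_fraction).mean()

for mic in (0.5, 1, 2, 4):
    print(f"MIC {mic} mg/L: PTA = {pta(mic):.1%}")
```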
Welcome to the Proceedings of the 12th Conference on Evolutionary Multi-Criterion Optimization (EMO), held in Leiden, The Netherlands, March 20–24, 2023, in hybrid format. Why hold EMO conferences? This question was discussed at EMO 2007 by its founders. The doubts regarding the viability of the conferences were fortunately cast away, as the importance, need, and ubiquity of multi-criterion optimization keep growing each year at a tremendous pace, impacting other areas and being influenced by them in turn. For millennia, optimization (improving things) has played a crucial role for humans. In more recent times, EMO (and optimization in general) has become important in science, in areas such as physics, biology, economics, social sciences, medical sciences, and mathematics. For instance, Snell's law was discovered by Willebrord Snellius through experimentation; only later was it realized that it can be derived from Fermat's principle of least time, which states that light always chooses the path that is traveled in the least time. As such, the laws of nature can often be perceived as a process of optimization and optimal decision-making. Secondly, many insights can be gained by looking at extremal objects (for instance, given 2n points in the plane, no three of which lie on a line, n of them blue and n of them red, it is always possible to create n line segments using the given points such that the endpoints have different colors and no two segments intersect; this can be understood by looking at the appropriate extremal object). Thirdly, methodologies and techniques developed in the EMO community have been empowering many practical scenarios: from finding the best taxation system, the best returns on investments while avoiding too-high risks, and discovering potent drug candidates with few side effects, to designing engineering structures that optimally balance energy consumption and environmental impact (e.g., minimizing CO2 or CH4 emissions). In the EMO conferences, we focus mainly on evolutionary approaches to solving multi-criterion optimization and decision-making problems, since the applicability of analytical/deterministic methods is often limited. For scenarios where both categories of approaches are applicable, hybridizations of analytical and evolutionary algorithms have appeared over the years, combining the strengths of both categories. Such hybridizations were also covered in the EMO conferences. In recent years, the EMO community has been bridged with the Multi-Criterion Decision-Making (MCDM) community, which focuses more on the decision-making aspects of the same problem. According to EMO tradition, also in this year's event, many works are dedicated to designing and studying algorithms, ranging from novel algorithmic operators to the theoretical analysis of existing ones. Notably, some contributions connect EMO with Machine Learning/Artificial Intelligence, which draws more and more research interest nowadays. Appropriate attention, including a tutorial, is also paid to benchmarking and empirical performance assessment, for instance with new benchmarking problem sets. Furthermore, some submissions address real-world problems using EMO methodologies, which nicely complete the scope of the conference.
The problem of approximating the Pareto front of a multiobjective optimization problem can be reformulated as the problem of finding a set that maximizes the hypervolume indicator. This paper establishes the analytical expression of the Hessian matrix of the mapping from a (fixed-size) collection of n points in the d-dimensional decision space (or m-dimensional objective space) to the scalar hypervolume indicator value. To define the Hessian matrix, the input set is vectorized, and the matrix is derived by analytical differentiation of the mapping from a vectorized set to the hypervolume indicator. The Hessian matrix plays a crucial role in second-order methods, such as the Newton-Raphson optimization method, and it can be used for the verification of locally optimal sets. So far, the full analytical expression was only established and analyzed for the relatively simple bi-objective case. This paper derives the full expression for arbitrary dimensions (m ≥ 2 objective functions). For the practically important three-dimensional case, we also provide an asymptotically efficient algorithm with time complexity in O(n log n) for the exact computation of the non-zero entries of the Hessian matrix, and we establish a sharp bound of 12m−6 for the number of non-zero entries. For the general m-dimensional case, a compact recursive analytical expression is established and its algorithmic implementation is discussed; some sparsity results, implied by the recursive expression, can also be established for the general case. To validate and illustrate the analytically derived algorithms and results, we provide a few numerical examples using Python and Mathematica implementations. Open-source implementations of the algorithms and testing data are made available as a supplement to this paper.
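For readers who want a quick numerical cross-check of such derivations, the sketch below computes the 2-D hypervolume indicator of a vectorized point set and a finite-difference approximation of its Hessian. This is not the paper's analytical O(n log n) algorithm; the reference point and the test set are illustrative assumptions, and all points are assumed to dominate the reference point.

```python
# Numerical sketch: 2-D hypervolume of a vectorized point set (minimization)
# and a central-difference Hessian of the set-to-scalar mapping, usable as a
# cross-check against analytical expressions.
import numpy as np

REF = np.array([1.0, 1.0])  # reference point; points assumed to dominate it

def hypervolume_2d(x):
    """Hypervolume dominated by the points in x (flattened n*2 vector) w.r.t. REF."""
    pts = x.reshape(-1, 2)
    pts = pts[np.argsort(pts[:, 0])]            # sort by first objective
    hv, prev_f2 = 0.0, REF[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                        # dominated points add no area
            hv += (REF[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def numerical_hessian(f, x, h=1e-5):
    """Central finite-difference Hessian of a scalar function f at x."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

x = np.array([0.2, 0.7, 0.5, 0.4, 0.8, 0.1])    # three mutually non-dominated points
print("hypervolume:", hypervolume_2d(x))
print(np.round(numerical_hessian(hypervolume_2d, x), 3))
```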
Tannemaat, M.R.; Kefalas, M.; Geraedts, V.J.; Remijn-Nelissen, L.; Verschuuren, A.J.M.; Koch, M.; ... ; Bäck, T.H.W. 2023
OBJECTIVE: Distinguishing normal, neuropathic and myopathic electromyography (EMG) traces can be challenging. We aimed to create an automated time series classification algorithm. METHODS: EMGs of healthy controls (HC, n = 25), patients with amyotrophic lateral sclerosis (ALS, n = 20) and inclusion body myositis (IBM, n = 20) were retrospectively selected based on longitudinal clinical follow-up data (ALS and HC) or muscle biopsy (IBM). A machine learning pipeline was applied based on 5-second EMG fragments of each muscle. Diagnostic yield, expressed as the area under the curve (AUC) of a receiver operating characteristic curve, accuracy, sensitivity, and specificity, was determined per muscle (muscle-level) and per patient (patient-level). RESULTS: Diagnostic yield of the classification ALS vs. HC was AUC 0.834 ± 0.014 at muscle-level and 0.856 ± 0.009 at patient-level. For the classification HC vs. IBM, AUC was 0.744 ± 0.043 at muscle-level and 0.735 ± 0.029 at patient-level. For the classification ALS vs. IBM, AUC was 0.569 ± 0.024 at muscle-level and 0.689 ± 0.035 at patient-level. CONCLUSIONS: An automated time series classification algorithm can distinguish the EMGs of healthy individuals from those of patients with ALS with a high diagnostic yield. Using longer EMG fragments with different levels of muscle activation may improve performance. SIGNIFICANCE: In the future, machine learning algorithms may help improve the diagnostic accuracy of EMG examinations.
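A minimal sketch of the kind of pipeline summarized above follows: simple features extracted from 5-second fragments, a standard classifier, and AUC computed at muscle level with a mean aggregation to patient level. The feature set, model choice, and the synthetic stand-in data are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a muscle-level / patient-level EMG classification pipeline on
# synthetic stand-in signals (HC vs. ALS as a toy binary task).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(fragment):
    """A few generic time-series features per 5-second EMG fragment."""
    return [fragment.mean(), fragment.std(), np.abs(np.diff(fragment)).mean(),
            (np.diff(np.sign(fragment)) != 0).mean()]      # zero-crossing rate

# Synthetic stand-in data: 40 patients x 6 muscles x 5 s at 1 kHz
n_patients, n_muscles, fs = 40, 6, 1000
labels = rng.integers(0, 2, n_patients)                     # 0 = HC, 1 = ALS (toy)
X, y, pid = [], [], []
for p in range(n_patients):
    for m in range(n_muscles):
        sig = rng.normal(0, 1 + 0.3 * labels[p], 5 * fs)
        X.append(features(sig)); y.append(labels[p]); pid.append(p)
X, y, pid = np.array(X), np.array(y), np.array(pid)

# Split by patient so fragments of one patient never appear in both sets
train_p, test_p = train_test_split(np.arange(n_patients), random_state=0, stratify=labels)
tr, te = np.isin(pid, train_p), np.isin(pid, test_p)
clf = RandomForestClassifier(random_state=0).fit(X[tr], y[tr])
prob = clf.predict_proba(X[te])[:, 1]
print("muscle-level AUC :", round(roc_auc_score(y[te], prob), 3))
patient_prob = [prob[pid[te] == p].mean() for p in test_p]  # aggregate per patient
patient_y = [labels[p] for p in test_p]
print("patient-level AUC:", round(roc_auc_score(patient_y, patient_prob), 3))
```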
All tissue development and replenishment relies upon the breaking of symmetries leading to the morphological and operational differentiation of progenitor cells into more specialized cells. One of the main engines driving this process is the Notch signal transduction pathway, a ubiquitous signalling system found in the vast majority of metazoan cell types characterized to date. Broadly speaking, Notch receptor activity is governed by a balance between two processes: 1) intercellular Notch transactivation triggered via interactions between receptors and ligands expressed in neighbouring cells; 2) intracellular cis inhibition caused by ligands binding to receptors within the same cell. Additionally, recent reports have also unveiled evidence of cis activation. Whilst context-dependent Notch receptor clustering has been hypothesized, to date Notch signalling has been assumed to involve an interplay between receptor and ligand monomers. In this study, we demonstrate biochemically, through a mutational analysis of DLL4, both in vitro and in tissue culture cells, that Notch ligands can efficiently self-associate. We found that the membrane-proximal EGF-like repeat of DLL4 was necessary and sufficient to promote oligomerization/dimerization. Mechanistically, our experimental evidence supports the view that DLL4 ligand dimerization is specifically required for cis-inhibition of Notch receptor activity. To further substantiate these findings, we have adapted and extended existing ordinary differential equation-based models of Notch signalling to take account of the ligand dimerization-dependent cis-inhibition reported here. Our new model faithfully recapitulates our experimental data and improves predictions based upon published data. Collectively, our work favours a model in which the net output following Notch receptor/ligand binding results from ligand monomer-driven Notch receptor transactivation (and cis activation) counterposed by ligand dimer-mediated cis-inhibition.
Author summary: The growth and maintenance of tissues is a fundamental characteristic of metazoan life, controlled by a highly conserved core of cell signal transduction networks. One such pathway, the Notch signalling system, plays a unique role in these phenomena by orchestrating the generation of the phenotypic and genetic asymmetries which underlie tissue development and remodeling. At the molecular level, it achieves this via two specific types of receptor/ligand interaction: intercellular binding of receptors and ligands expressed in neighbouring cells, which triggers receptor activation (transactivation); and intracellular receptor/ligand binding within the same cell, which blocks receptor activation (cis inhibition). Together, these counterposed mechanisms determine the strength, the direction and the specificity of Notch signalling output. Whilst the basic mechanisms of receptor transactivation have been delineated in some detail, the precise nature of cis inhibition has remained enigmatic. Through a combination of experimental approaches and computational modelling, in this study we present a new model of Notch signalling in which ligand monomers promote Notch receptor transactivation, whereas cis inhibition is induced optimally via ligand dimers. This is the first model to include a concrete molecular distinction, in terms of ligand configuration, between the main branches of Notch signalling. Our model faithfully recapitulates both our presented experimental results and the recently published work of others, and provides a novel perspective for understanding Notch-regulated biological processes such as embryo development and angiogenesis.
Competing Interest Statement: The authors have declared no competing interest.
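As a toy illustration of the modelling idea, and explicitly not the authors' published ODE model, the sketch below lets trans-activation be driven by neighbouring ligand while cis-inhibition acts through a ligand-dimer term; the species, rate constants, and fast dimerization equilibrium are all illustrative assumptions.

```python
# Toy ODE sketch: Notch signal s is produced by trans-activation from
# neighbouring ligand and suppressed by cis-inhibition via ligand dimers
# formed in the same cell. All parameters are illustrative assumptions.
from scipy.integrate import solve_ivp

K_TRANS, K_CIS, K_DIM, K_DEG = 1.0, 5.0, 2.0, 0.5

def rhs(t, y, d_neighbour, d_cis):
    n, s = y                                    # free Notch receptor, downstream signal
    d_dimer = K_DIM * d_cis ** 2                # ligand dimers (fast-equilibrium assumption)
    activation = K_TRANS * n * d_neighbour      # trans-activation by neighbouring ligand
    inhibition = K_CIS * n * d_dimer            # cis-inhibition by ligand dimers
    dn = 1.0 - K_DEG * n - activation - inhibition   # production, decay, consumption
    ds = activation - K_DEG * s
    return [dn, ds]

for d_nb in (0.0, 0.5, 2.0):
    sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], args=(d_nb, 1.0))
    print(f"neighbour ligand {d_nb}: signal at t=50 ~ {sol.y[1, -1]:.3f}")
```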
Thirty years, 1993–2023, is a huge time frame in science. We address some major developments in the field of evolutionary algorithms, with applications in parameter optimization, over these 30 years. These include the covariance matrix adaptation evolution strategy and some fast-growing fields such as multimodal optimization, surrogate-assisted optimization, multiobjective optimization, and automated algorithm design. Moreover, we also discuss particle swarm optimization and differential evolution, which did not exist 30 years ago either. One of the key arguments made in the paper is that we need fewer algorithms, not more; the current trend, however, is to continuously claim paradigms from nature as useful new optimization algorithms. Moreover, we argue that we need proper benchmarking procedures to sort out whether a newly proposed algorithm is useful or not. We also briefly discuss automated algorithm design approaches, including configurable algorithm design frameworks, as the proposed next step toward designing optimization algorithms automatically, rather than by hand.
The landmark achievements of AlphaGo Zero have created great research interest into self-play in reinforcement learning. In self-play, Monte Carlo Tree Search (MCTS) is used to train a deep neural network, which is then itself used in tree searches. The training is governed by many hyper-parameters. There has been surprisingly little research on design choices for hyper-parameter values and loss functions, presumably because of the prohibitive computational cost of exploring the parameter space. In this paper, we investigate 12 hyper-parameters in an AlphaZero-like self-play algorithm and evaluate how these parameters contribute to training. Through multi-objective analysis, we identify four important hyper-parameters to assess further. To start, we find the surprising result that too much training can sometimes lead to lower performance. Our main result is that the number of self-play iterations subsumes MCTS-search simulations, game episodes and training epochs. As a consequence of our experiments, we provide recommendations on setting hyper-parameter values in self-play: the outer loop of self-play iterations should be emphasized over the inner loops, which means that hyper-parameters for the inner loops should be set to lower values. A secondary result of our experiments concerns the choice of optimization goals, for which we also provide recommendations.
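The hierarchy of hyper-parameters discussed above can be made concrete with the skeleton below, which shows the outer self-play iteration loop and the inner-loop parameters (MCTS simulations, game episodes, training epochs). The self-play and training steps are stubbed out, and the numerical values are placeholders rather than the paper's recommendations.

```python
# Skeleton of an AlphaZero-like self-play training loop, highlighting which
# hyper-parameters belong to the outer loop and which to the inner loops.
import random

OUTER_ITERATIONS = 50        # outer loop: number of self-play iterations (emphasized)
EPISODES_PER_ITER = 10       # inner loop: games generated per iteration
MCTS_SIMULATIONS = 25        # inner loop: tree-search simulations per move
TRAINING_EPOCHS = 5          # inner loop: epochs over the replay buffer

def self_play_episode(net, n_simulations):
    """Stub: play one game guided by MCTS and return (state, outcome) examples."""
    return [(random.random(), random.random()) for _ in range(20)]

def train(net, examples, epochs):
    """Stub: fit the network on the collected examples and return it."""
    return net

net, replay_buffer = object(), []
for it in range(OUTER_ITERATIONS):
    for _ in range(EPISODES_PER_ITER):
        replay_buffer += self_play_episode(net, MCTS_SIMULATIONS)
    net = train(net, replay_buffer, TRAINING_EPOCHS)   # a real buffer would be capped
print(f"{OUTER_ITERATIONS} iterations, buffer size {len(replay_buffer)}")
```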
Sabatini, F.M.; Jiménez-Alfaro, B.; Jandt, U.; Chytrý, M.; Field, R.; Kessler, M.; ... ; Bruelheide, H. 2022
Global patterns of regional (gamma) plant diversity are relatively well known, but whether these patterns hold for local communities, and the dependence on spatial grain, remain controversial. Using data on 170,272 georeferenced local plant assemblages, we created global maps of alpha diversity (local species richness) for vascular plants at three different spatial grains, for forests and non-forests. We show that alpha diversity is consistently high across grains in some regions (for example, Andean-Amazonian foothills), but regional ‘scaling anomalies’ (deviations from the positive correlation) exist elsewhere, particularly in Eurasian temperate forests with disproportionally higher fine-grained richness and many African tropical forests with disproportionally higher coarse-grained richness. The influence of different climatic, topographic and biogeographical variables on alpha diversity also varies across grains. Our multi-grain maps return a nuanced understanding of vascular plant biodiversity patterns that complements classic maps of biodiversity hotspots and will improve predictions of global change effects on biodiversity.
Real-world optimization scenarios under uncertainty and noise are typically handled with robust optimization techniques, which reformulate the original optimization problem into a robust counterpart, e.g., by taking an average of the function values over different perturbations to a specific input. Solving the robust counterpart instead of the original problem can significantly increase the associated computational cost, which, to the best of our knowledge, is often overlooked in the literature. Such an extra cost brought by robust optimization might depend on the problem landscape, the dimensionality, the severity of the uncertainty, and the formulation of the robust counterpart. This paper takes an empirical approach that evaluates and compares the computational cost brought by different robustness formulations in Kriging-based optimization on a wide combination (300 test cases) of problems, uncertainty levels, and dimensions. We mainly focus on the CPU time taken to find robust solutions, and choose five commonly applied robustness formulations: "mini-max robustness", "mini-max regret robustness", "expectation-based robustness", "dispersion-based robustness", and "composite robustness". We assess the empirical performance of these robustness formulations in terms of a fixed-budget and a fixed-target analysis, from which we find that "mini-max robustness" is the most practical formulation with respect to the associated computational cost.
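To make the notion of a robust counterpart concrete, the sketch below wraps an original objective in sampled "mini-max" and "expectation-based" formulations; each robust evaluation then costs many extra function evaluations, which is exactly the kind of overhead studied here. The test function, perturbation level, and sample size are illustrative assumptions, and no Kriging surrogate is involved.

```python
# Sketch of two robust counterparts of an objective f, approximated by
# sampling perturbations of the input. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Original (noise-free) objective: a simple multimodal test function."""
    return float(np.sum(x**2) + 2.0 * np.sin(5.0 * x).sum())

def minimax_robust(x, delta=0.2, n_samples=100):
    """Worst observed value over sampled perturbations x + u with |u_i| <= delta."""
    perturbations = rng.uniform(-delta, delta, size=(n_samples, x.size))
    return max(f(x + u) for u in perturbations)

def expectation_robust(x, delta=0.2, n_samples=100):
    """Average value over the same perturbation distribution."""
    perturbations = rng.uniform(-delta, delta, size=(n_samples, x.size))
    return float(np.mean([f(x + u) for u in perturbations]))

x = np.array([0.3, -0.8])
print("nominal     :", f(x))
print("mini-max    :", minimax_robust(x))      # costs n_samples extra evaluations
print("expectation :", expectation_robust(x))  # ditto: the overhead discussed above
```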
As combinatorial optimization is one of the main quantum computing applications, many methods based on parameterized quantum circuits are being developed. In general, a set of parameters is tweaked to optimize a cost function computed from the quantum circuit output. One of these algorithms, the Quantum Approximate Optimization Algorithm (QAOA), stands out as a promising approach to tackling combinatorial problems. However, finding the appropriate parameters is a difficult task. Although QAOA exhibits concentration properties, these can depend on instance characteristics that may not be easy to identify, but may nonetheless offer useful information for finding good parameters. In this work, we study unsupervised machine learning approaches for setting these parameters without optimization. We perform clustering on the angle values but also on instance encodings (using instance features or the output of a variational graph autoencoder), and compare different approaches. These angle-finding strategies can be used to reduce calls to quantum circuits when leveraging QAOA as a subroutine. We showcase them within Recursive-QAOA up to depth 3, where the number of QAOA parameters used per iteration is limited to 3, achieving a median approximation ratio of 0.94 for MaxCut over 200 Erdős-Rényi graphs. We obtain performances similar to the case where we extensively optimize the angles, hence saving numerous circuit calls.
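A minimal sketch of the angle-clustering idea, not the authors' pipeline, follows: previously optimized depth-1 QAOA angles are clustered with k-means, and the cluster centroids are reused as optimization-free parameter settings for new instances. The angle data here are synthetic stand-ins.

```python
# Sketch: cluster previously optimized (gamma, beta) QAOA angles and reuse
# the centroids as fixed parameter settings, avoiding per-instance optimization.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for angles optimized on training instances, concentrated
# around a few typical values as suggested by QAOA parameter concentration.
angles = np.concatenate([
    rng.normal([0.4, 0.25], 0.05, size=(100, 2)),
    rng.normal([0.8, 0.15], 0.05, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(angles)
print("representative (gamma, beta) settings:\n", kmeans.cluster_centers_)

# For a new instance, one would evaluate the circuit only at these centroids
# (or at the centroid matching the instance's features or embedding), instead
# of running a full classical optimization of the angles.
```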