Objective: Distinguishing normal, neuropathic and myopathic electromyography (EMG) traces can be challenging. We aimed to create an automated time series classification algorithm.
Methods: EMGs of healthy controls (HC, n = 25), patients with amyotrophic lateral sclerosis (ALS, n = 20) and patients with inclusion body myositis (IBM, n = 20) were retrospectively selected based on longitudinal clinical follow-up data (ALS and HC) or muscle biopsy (IBM). A machine learning pipeline was applied to 5-second EMG fragments of each muscle. Diagnostic yield, expressed as the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity, was determined per muscle (muscle-level) and per patient (patient-level).
Results: Diagnostic yield of the classification ALS vs. HC was: AUC 0.834 ± 0.014 at muscle-level and 0.856 ± 0.009 at patient-level. For the classification HC vs. IBM, AUC was 0.744 ± 0.043 at muscle-level and 0.735 ± 0.029 at patient-level. For the classification ALS vs. IBM, AUC was 0.569 ± 0.024 at muscle-level and 0.689 ± 0.035 at patient-level.
Conclusions: An automated time series classification algorithm can distinguish EMGs of healthy individuals from those of patients with ALS with a high diagnostic yield. Using longer EMG fragments with different levels of muscle activation may improve performance.
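As an illustration of the reported metric (not the authors' pipeline), the AUC of a binary classifier such as ALS vs. HC can be computed from its scores via the rank-sum (Mann-Whitney) identity: AUC = P(score of a positive case > score of a negative case). The scores below are hypothetical.

```python
def auc(scores_pos, scores_neg):
    """AUC = P(positive score > negative score); ties count as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical patient-level classifier scores
als_scores = [0.9, 0.8, 0.75, 0.6]   # "positive" class (ALS)
hc_scores = [0.4, 0.55, 0.7, 0.3]    # "negative" class (HC)
print(auc(als_scores, hc_scores))    # 0.9375
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is how values such as 0.856 at patient-level should be read.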
The landmark achievements of AlphaGo Zero have created great research interest in self-play in reinforcement learning. In self-play, Monte Carlo Tree Search (MCTS) is used to train a deep neural network, which is then itself used in tree searches. The training is governed by many hyper-parameters. There has been surprisingly little research on design choices for hyper-parameter values and loss functions, presumably because of the prohibitive computational cost of exploring the parameter space. In this paper, we investigate 12 hyper-parameters in an AlphaZero-like self-play algorithm and evaluate how these parameters contribute to training. Through multi-objective analysis, we identify four important hyper-parameters for further assessment. To start, we find the surprising result that too much training can sometimes lead to lower performance. Our main result is that the number of self-play iterations subsumes MCTS-search simulations, game episodes and training epochs. As a consequence of our experiments, we provide recommendations on setting hyper-parameter values in self-play: the outer loop of self-play iterations should be emphasized over the inner loop, which means that hyper-parameters for the inner loop should be set to lower values. A secondary result of our experiments concerns the choice of optimization goals, for which we also provide recommendations.
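The outer-loop/inner-loop trade-off can be made concrete with a toy cost model (hypothetical, not the paper's experimental setup): at a fixed compute budget, "emphasizing the outer loop" means spending more self-play iterations while lowering the inner-loop settings (game episodes, MCTS simulations, training epochs).

```python
def total_cost(iterations, episodes, simulations, epochs):
    # Toy cost model: each self-play iteration plays `episodes` games,
    # each game using `simulations` MCTS simulations per move (move count
    # folded into the constant), then trains for `epochs` epochs.
    return iterations * (episodes * simulations + epochs)

# Two ways to spend the same budget:
inner_heavy = total_cost(iterations=10, episodes=50, simulations=200, epochs=20)
outer_heavy = total_cost(iterations=100, episodes=10, simulations=100, epochs=2)
print(inner_heavy, outer_heavy)  # 100200 100200 -> equal budget, different split
```

Under this model, the paper's recommendation corresponds to preferring the `outer_heavy` split: more iterations of the outer loop, with lower values for the inner-loop hyper-parameters.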
Sabatini, F.M.; Jiménez-Alfaro, B.; Jandt, U.; Chytrý, M.; Field, R.; Kessler, M.; ... ; Bruelheide, H. 2022
Global patterns of regional (gamma) plant diversity are relatively well known, but whether these patterns hold for local communities, and how they depend on spatial grain, remain controversial. Using data on 170,272 georeferenced local plant assemblages, we created global maps of alpha diversity (local species richness) for vascular plants at three different spatial grains, for forests and non-forests. We show that alpha diversity is consistently high across grains in some regions (for example, the Andean-Amazonian foothills), but regional ‘scaling anomalies’ (deviations from the positive correlation) exist elsewhere, particularly in Eurasian temperate forests with disproportionately higher fine-grained richness and many African tropical forests with disproportionately higher coarse-grained richness. The influence of different climatic, topographic and biogeographical variables on alpha diversity also varies across grains. Our multi-grain maps return a nuanced understanding of vascular plant biodiversity patterns that complements classic maps of biodiversity hotspots and will improve predictions of global change effects on biodiversity.
Real-world optimization scenarios under uncertainty and noise are typically handled with robust optimization techniques, which reformulate the original optimization problem into a robust counterpart, e.g., by taking an average of the function values over different perturbations of a specific input. Solving the robust counterpart instead of the original problem can significantly increase the associated computational cost, which, to the best of our knowledge, is often overlooked in the literature. The extra cost brought by robust optimization may depend on the problem landscape, the dimensionality, the severity of the uncertainty, and the formulation of the robust counterpart. This paper takes an empirical approach that evaluates and compares the computational cost of different robustness formulations in Kriging-based optimization on a wide combination (300 test cases) of problems, uncertainty levels, and dimensions. We mainly focus on the CPU time taken to find robust solutions, and choose five commonly applied robustness formulations: "mini-max robustness", "mini-max regret robustness", "expectation-based robustness", "dispersion-based robustness", and "composite robustness". We assess the empirical performance of these robustness formulations in terms of a fixed-budget and a fixed-target analysis, from which we find that "mini-max robustness" is the most practical formulation with respect to the associated computational cost.
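A minimal sketch of two of the robustness formulations named above, on a hypothetical 1-D objective f(x) = x² under additive input uncertainty δ ∈ [-0.5, 0.5] (discretized here). The setup and names are illustrative, not the paper's benchmark; the point is that each robust-counterpart evaluation multiplies the number of underlying function evaluations, which is the extra computational cost the paper quantifies.

```python
def f(x):
    # Hypothetical objective under input uncertainty
    return x * x

DELTAS = [-0.5, -0.25, 0.0, 0.25, 0.5]  # discretized uncertainty set

def minimax_robust(x):
    # "mini-max robustness": guard against the worst-case perturbation
    return max(f(x + d) for d in DELTAS)

def expectation_robust(x):
    # "expectation-based robustness": average over the perturbations
    return sum(f(x + d) for d in DELTAS) / len(DELTAS)

# One robust evaluation at x = 1.0 costs len(DELTAS) evaluations of f
print(minimax_robust(1.0))      # 2.25  (worst case: delta = +0.5)
print(expectation_robust(1.0))  # 1.125 (mean over the five deltas)
```

Note that the two counterparts generally have different minimizers and different landscapes, which is one reason their optimization costs can differ.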
As combinatorial optimization is one of the main quantum computing applications, many methods based on parameterized quantum circuits are being developed. In general, a set of parameters is tweaked to optimize a cost function computed from the quantum circuit output. One of these algorithms, the Quantum Approximate Optimization Algorithm (QAOA), stands out as a promising approach to tackling combinatorial problems. However, finding the appropriate parameters is a difficult task. Although QAOA exhibits concentration properties, these can depend on instance characteristics that may not be easy to identify, but may nonetheless offer useful information for finding good parameters. In this work, we study unsupervised machine learning approaches for setting these parameters without optimization. We perform clustering on the angle values as well as on instance encodings (using instance features or the output of a variational graph autoencoder), and compare different approaches. These angle-finding strategies can be used to reduce calls to quantum circuits when leveraging QAOA as a subroutine. We showcase them within Recursive-QAOA up to depth 3, where the number of QAOA parameters used per iteration is limited to 3, achieving a median approximation ratio of 0.94 for MaxCut over 200 Erdős-Rényi graphs. We obtain performance similar to the case where we extensively optimize the angles, hence saving numerous circuit calls.
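The angle-clustering idea can be sketched as follows (a hypothetical toy, not the paper's method or data): cluster previously optimized depth-1 QAOA angle pairs (γ, β) and reuse a cluster centre for new instances instead of re-optimizing, thereby saving circuit calls. A minimal k-means suffices for the sketch.

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal k-means on tuples; returns the k cluster centres."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            groups[i].append(p)
        # Recompute centres as group means (keep old centre if group empty)
        centres = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres

# Toy "previously optimized" depth-1 angle pairs from two instance families
angles = [(0.40, 0.70), (0.42, 0.68), (0.38, 0.72),
          (1.10, 0.20), (1.12, 0.22), (1.08, 0.18)]
centres = kmeans(angles, k=2)
print(sorted(centres))  # two representative angle settings to reuse
```

For a new instance, one would pick the centre of the cluster its encoding falls into and run the circuit directly with those angles, skipping the optimization loop entirely.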
Improving resource efficiency (RE) is an important objective of the Sustainable Development Goals. In this study we find a strong exponential relationship between the economic complexity index (ECI) and the RE of countries. ECI measures the level of accumulated knowledge of a society that enables the products it makes. The relationship between ECI and RE is stronger for primary-material importers and countries with stable institutions. Assessing a country's level of ECI also allows an outlook on future RE trends. We explain how ECI influences RE at the product level by establishing the product space for each country and by defining core products that combine a high product complexity index, high RE (i.e., unit price) and promising expansibility (i.e., core number), which indicates the potential to produce more advanced products in the future. Policies that improve economic complexity and invest in core products appear to be a priority for achieving sustainable development.
Liu, Y.; Linz, H.; Fang, M.; Henning, T.; Wolf, S.; Flock, M.; ... ; Li, D. 2022