Improving resource efficiency (RE) is an important objective of the Sustainable Development Goals. In this study we find a strong exponential relationship between the economic complexity index (ECI) and the RE of countries. ECI measures the level of accumulated knowledge in a society, as embodied in the products it makes. The relationship between ECI and RE is stronger for primary-material importers and for countries with stable institutions. Assessing a country's ECI therefore also offers an outlook on its future RE trends. We explain how ECI influences RE at the product level by establishing the product space for each country and by defining core products, which combine a high product complexity index, high RE (i.e., unit price), and promising expansibility (i.e., core number), indicating the potential to produce more advanced products in the future. Policies that improve economic complexity and invest in core products appear to be a priority for achieving sustainable development.
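The study's data are not reproduced here, but as a minimal sketch of what fitting such an exponential relationship looks like operationally (all numbers below are invented for illustration only), one could regress RE on ECI with an exponential model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: ECI scores and resource efficiency for a set of
# countries; the paper's actual data are not reproduced here.
eci = np.array([-1.2, -0.5, 0.0, 0.4, 0.9, 1.5, 2.1])
re  = np.array([0.6, 0.9, 1.3, 1.7, 2.6, 4.1, 7.2])  # illustrative values only

def exp_model(x, a, b):
    return a * np.exp(b * x)  # RE = a * exp(b * ECI)

(a, b), _ = curve_fit(exp_model, eci, re, p0=(1.0, 1.0))
# Equivalently, a linear fit of log(RE) on ECI recovers the same exponent b
b_lin = np.polyfit(eci, np.log(re), 1)[0]
```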
Liu, Y.; Linz, H.; Fang, M.; Henning, T.; Wolf, S.; Flock, M.; ... ; Li, D. 2022
Background and Aims: In patients with acute liver failure (ALF) who suffer from massive hepatocyte loss, liver progenitor cells (LPCs) take over key hepatocyte functions, which ultimately determines survival. This study investigated how the expression of hepatocyte nuclear factor 4 alpha (HNF4 alpha), its regulators, and its targets in LPCs determines the clinical outcome of patients with ALF. Approach and Results: Clinicopathological associations were scrutinized in 19 patients with ALF (9 recovered and 10 receiving liver transplantation). Regulatory mechanisms between follistatin, activin, HNF4 alpha, and coagulation factor expression in LPCs were investigated in vitro and in metronidazole-treated zebrafish. A prospective clinical study followed 186 patients with cirrhosis for 80 months to assess the relevance of follistatin levels to the prevalence and mortality of acute-on-chronic liver failure. Recovered patients with ALF robustly express HNF4 alpha in either LPCs or remaining hepatocytes. As in hepatocytes, HNF4 alpha controls the expression of coagulation factors in LPCs by binding to their promoters. HNF4 alpha expression in LPCs requires the forkhead box protein H1-Sma and Mad homolog 2/3/4 transcription factor complex, which is promoted by the TGF-beta superfamily member activin. Activin signaling in LPCs is negatively regulated by follistatin, a hepatocyte-derived hormone controlled by insulin and glucagon. In contrast to patients requiring liver transplantation, recovered patients show a normal activin/follistatin ratio and a robust abundance of the activin effectors phosphorylated Sma and Mad homolog 2 and HNF4 alpha in LPCs, leading to significantly improved coagulation function. A follow-up study indicated that serum follistatin levels can predict the incidence and mortality of acute-on-chronic liver failure. Conclusions: These results highlight a crucial role of the follistatin-controlled activin-HNF4 alpha-coagulation axis in determining the clinical outcome of massive hepatocyte loss-induced ALF. The effects of insulin and glucagon on follistatin suggest a key role of the systemic metabolic state in ALF.
In deep reinforcement learning, searching and learning techniques are two important components. They can be used independently and in combination to deal with different problems in AI, and these results have inspired research into artificial general intelligence (AGI). We study table-based classic Q-learning on the General Game Playing (GGP) system, showing that classic Q-learning works on GGP, although convergence is slow and it is computationally expensive to learn complex games. This dissertation then uses an AlphaZero-like self-play framework to explore AGI on small games. By tuning different hyperparameters, the roles, effects, and contributions of searching and learning are studied. A further experiment shows that search techniques can act as experts that generate better training examples, speeding up the start phase of training. To extend the AlphaZero-like self-play approach to complex single-player games, the Morpion Solitaire game is implemented in combination with the Ranked Reward method; our first AlphaZero-based approach achieves a near-human-best record. Overall, in this thesis, both searching and learning techniques are studied, by themselves and in combination, in GGP and AlphaZero-like self-play systems. We do so to take steps towards artificial general intelligence: systems that exhibit intelligent behavior in more than one domain.
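For reference, a minimal sketch of the tabular Q-learning update the thesis builds on might look like this (the `env` interface with `reset`, `step`, and `legal_actions` is a hypothetical stand-in for a GGP game wrapper, not code from the dissertation):

```python
import random
from collections import defaultdict

def q_learning_episode(env, Q, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Run one episode of epsilon-greedy tabular Q-learning on a game env."""
    state = env.reset()
    done = False
    while not done:
        actions = env.legal_actions(state)
        # Epsilon-greedy: explore with probability epsilon, else exploit Q
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward, done = env.step(action)
        # Bootstrapped target: reward plus discounted best next-state value
        best_next = 0.0 if done else max(Q[(next_state, a)]
                                         for a in env.legal_actions(next_state))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

Q = defaultdict(float)  # the "table" in table-based Q-learning
```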
Introducing new algorithmic ideas is a key part of the continuous improvement of existing optimization algorithms. However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task. Often, the component is added to a default implementation of the underlying algorithm and compared against a limited set of other variants. Such an assessment ignores any potential interplay with other algorithmic ideas that share the same base algorithm, which is critical to understanding the exact contributions being made. We explore a more extensive procedure that uses hyperparameter tuning as a means of assessing the benefits of new algorithmic components. This allows for a more robust analysis by not only focusing on the impact on performance but also investigating how this performance is achieved. We implement our suggestion in the context of the Modular CMA-ES framework, which was redesigned and extended to include some new modules and several new options for existing modules, mostly focused on the step-size adaptation method. Our analysis highlights the differences between these new modules and identifies the situations in which they contribute the most.
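As a rough illustration of this tuning-based assessment (the module names, options, and `evaluate` function below are hypothetical stand-ins, and plain random search stands in for a proper tuner), one would pin the component under study and tune everything else around it:

```python
import random

# Hypothetical module space loosely inspired by a modular CMA-ES: each module
# has several options; "step_size_adaptation" is the component under study.
MODULE_SPACE = {
    "step_size_adaptation": ["csa", "tpa", "msr", "psr"],
    "mirrored_sampling": [False, True],
    "elitism": [False, True],
}

def sample_config(space):
    """Draw a random configuration from the module space."""
    return {name: random.choice(options) for name, options in space.items()}

def assess_option(evaluate, space, module, option, budget=50):
    """Tune the remaining modules with the studied module pinned to `option`,
    so each option is compared at its best rather than at arbitrary defaults."""
    best = float("inf")
    for _ in range(budget):
        cfg = sample_config(space)
        cfg[module] = option              # pin the component being assessed
        best = min(best, evaluate(cfg))   # evaluate: mean loss over benchmark runs
    return best

# scores = {opt: assess_option(evaluate, MODULE_SPACE, "step_size_adaptation", opt)
#           for opt in MODULE_SPACE["step_size_adaptation"]}
```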
Structural Bias (SB) is an important type of algorithmic deficiency within iterative optimisation heuristics. However, methods for detecting structural bias have not yet fully matured, and recent studies have uncovered many interesting questions, one of which is how structural bias relates to anisotropy. Intuitively, an algorithm that is not isotropic would be considered structurally biased. However, there have been cases where algorithms appear to show SB only in some dimensions. As such, we investigate whether these algorithms actually exhibit anisotropy and how this impacts the detection of SB. We find that anisotropy is very rare, and even in cases where it is present, there are clear tests for SB that do not rely on any assumption of isotropy, so we can safely expand the suite of SB tests to encompass the kinds of deficiencies not found by the original tests. We propose several additional testing procedures for SB detection and aim to motivate further research into the creation of a robust portfolio of tests. This is crucial, since no single test will work effectively with all the types of SB we identify.
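One common way of testing for SB, sketched here under assumptions rather than as the paper's exact protocol (the `optimizer` callable returning its best point in [0, 1]^dim is an assumed interface), is to run the heuristic on an objective that carries no information, so any non-uniform concentration of final points signals bias:

```python
import numpy as np
from scipy import stats

def detect_structural_bias(optimizer, dim=5, runs=100, alpha=0.01):
    """On a purely random objective, the final best points of an unbiased
    optimizer should be uniform in [0, 1]^dim; deviation suggests SB."""
    finals = np.empty((runs, dim))
    for r in range(runs):
        rng = np.random.default_rng(r)
        f = lambda x: rng.uniform()      # objective carries no information
        finals[r] = optimizer(f, dim)    # assumed to return its best point in [0, 1]^dim
    # Per-dimension Kolmogorov-Smirnov test against the standard uniform
    pvals = [stats.kstest(finals[:, d], "uniform").pvalue for d in range(dim)]
    return [d for d, p in enumerate(pvals) if p < alpha]  # flagged dimensions
```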
A new acquisition function is proposed for solving robust optimization problems via Bayesian Optimization. The proposed acquisition function reflects the need for the robust rather than the nominal optimum, and is based on the intuition of utilizing the higher moments of the improvement. The efficacy of Bayesian Optimization based on this acquisition function is demonstrated on four test problems, each affected by three different levels of noise. Our findings suggest that the proposed acquisition function is promising, as it yields a better robust optimal value of the function in 6 of the 12 test scenarios when compared with the baseline.
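The paper's acquisition function itself is not reproduced here; as a purely hypothetical illustration of "using higher moments of the improvement", one could blend the mean of the improvement (classic expected improvement) with its second moment, estimated by Monte Carlo under the Gaussian posterior:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """Classic EI (first moment of the improvement) under a Gaussian
    posterior N(mu, sigma^2), for minimization."""
    z = (y_best - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def moment_acquisition(mu, sigma, y_best, w=0.5, n=10_000, seed=0):
    """Hypothetical moment-based score: mean of the improvement plus a
    weighted second moment, estimated by Monte Carlo sampling."""
    y = np.random.default_rng(seed).normal(mu, sigma, n)
    imp = np.maximum(y_best - y, 0.0)    # improvement I = max(y_best - Y, 0)
    return imp.mean() + w * imp.std()
```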
Background: Subthalamic deep brain stimulation (STN DBS) may relieve refractory motor complications in Parkinson's disease (PD) patients. Despite careful screening, it remains difficult to determine the severity of alpha-synucleinopathy involvement, which influences the risk of postoperative complications, including cognitive deterioration. Quantitative electroencephalography (qEEG) reflects cognitive dysfunction in PD and may provide biomarkers of postoperative cognitive decline. Objective: To develop an automated machine learning model based on preoperative EEG data to predict cognitive deterioration 1 year after STN DBS. Methods: Sixty DBS candidates were included; 42 patients had preoperative EEGs available to compute a fully automated machine learning model. Movement Disorder Society criteria classified patients as cognitively stable or deteriorated at 1-year follow-up. A total of 16,674 EEG features were extracted per patient; a Boruta algorithm selected the EEG features that reflect representative neurophysiological signatures for each class. A random forest classifier with 10-fold cross-validation and Bayesian optimization provided class differentiation. Results: Twenty-five patients were classified as cognitively stable and 17 patients demonstrated cognitive decline. The model differentiated the classes with a mean (SD) accuracy of 0.88 (0.05), a positive predictive value of 91.4% (95% CI 82.9, 95.9), and a negative predictive value of 85.0% (95% CI 81.9, 91.4). Predicted probabilities between classes were highly differential (hazard ratio 11.14 [95% CI 7.25, 17.12]); the risk of cognitive decline in patients with a high probability (>0.5) of being prognosticated as cognitively stable was very limited. Conclusions: Preoperative EEG can predict cognitive deterioration after STN DBS with high accuracy. Cortical neurophysiological alterations may indicate future cognitive decline and can be used as biomarkers during DBS screening.
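The study's code is not given here; a rough analogue of the described pipeline (Boruta feature selection followed by a 10-fold cross-validated random forest) could look as follows, using the open-source BorutaPy package. The Bayesian optimization of the forest's hyperparameters is omitted for brevity, and the values shown are plain defaults rather than the study's settings:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from boruta import BorutaPy  # pip install Boruta

def fit_pipeline(X, y, seed=42):
    """Boruta all-relevant feature selection, then a 10-fold cross-validated
    random forest; X is (n_patients, n_features), y the stable/decline labels."""
    rf = RandomForestClassifier(n_jobs=-1, class_weight="balanced",
                                max_depth=5, random_state=seed)
    selector = BorutaPy(rf, n_estimators="auto", random_state=seed)
    selector.fit(X, y)                   # expects numpy arrays
    X_sel = X[:, selector.support_]      # keep only confirmed features
    clf = RandomForestClassifier(random_state=seed)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = cross_val_score(clf, X_sel, y, cv=cv, scoring="accuracy")
    return X_sel, scores.mean()
```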
Objective: A downside of Deep Brain Stimulation (DBS) for Parkinson's Disease (PD) is that cognitive function may deteriorate postoperatively. Electroencephalography (EEG) was explored as a biomarker of cognition using a Machine Learning (ML) pipeline. Methods: A fully automated ML pipeline was applied to 112 PD patients, taking EEG time series as input and predicted class labels as output. The most extreme cognitive scores were selected for class differentiation, i.e., best vs. worst cognitive performance (n = 20 per group). A total of 16,674 features were extracted per patient; feature selection was performed using a Boruta algorithm. A random forest classifier was modelled; 10-fold cross-validation with Bayesian optimization was performed to ensure generalizability. The predicted class probabilities of the entire cohort were compared to actual cognitive performance. Results: Both groups were differentiated with a mean accuracy of 0.92; using only the occipital peak frequency yielded an accuracy of 0.67. Class probabilities and actual cognitive performance were negatively linearly correlated (b = -0.23, 95% confidence interval -0.29 to -0.18). Conclusions: Particularly high accuracies were achieved using a compound of automatically extracted EEG biomarkers, rather than a single spectral EEG feature, to classify PD patients according to cognition. Significance: Automated EEG assessment may have utility for cognitive profiling of PD patients during DBS screening.
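For the single-feature baseline mentioned above, a minimal sketch of extracting an occipital peak frequency might look as follows (the channel choice, band limits, and window length are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np
from scipy.signal import welch

def occipital_peak_frequency(eeg, fs, lo=4.0, hi=15.0):
    """Peak frequency of the power spectrum of one occipital channel (e.g. O1);
    the band limits and 4-second Welch windows are illustrative choices."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    band = (freqs >= lo) & (freqs <= hi)   # restrict to the band of interest
    return freqs[band][np.argmax(psd[band])]
```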
Wang, H.; Li, Z.; Tong, H.; Kolfschoten, M. van 2021
The imbalanced classification problem is highly relevant in both academic and industrial applications. Finding the best machine learning model for a specific imbalanced dataset is complicated by the large number of existing algorithms, each with its own hyperparameters. The Combined Algorithm Selection and Hyperparameter optimization (CASH) problem was introduced to tackle both aspects at the same time. However, CASH has not been studied in detail in the class-imbalance domain, where the best combination of resampling technique and classification algorithm is searched for, together with their optimized hyperparameters. We therefore target the CASH problem for imbalanced classification. We experiment with a search space of 5 classification algorithms, 21 resampling approaches, and 64 relevant hyperparameters in total. Moreover, we investigate the performance of two well-known optimization approaches: random search and the Tree Parzen Estimators approach, a kind of Bayesian optimization. For comparison, we also perform grid search on all combinations of resampling techniques and classification algorithms with their default hyperparameters. Our experimental results show that the Bayesian optimization approach outperforms the other approaches for CASH in this application domain.
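As a minimal sketch of what such a joint CASH search space looks like (a much smaller space than the paper's 5 classifiers and 21 resamplers; `X` and `y` denote a user-supplied imbalanced dataset), one could combine imbalanced-learn resamplers with scikit-learn classifiers under hyperopt's TPE:

```python
from hyperopt import fmin, tpe, hp, Trials
from imblearn.over_sampling import SMOTE, RandomOverSampler
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Joint search space: resampler choice, classifier choice, and their hyperparameters
space = {
    "resampler": hp.choice("resampler", [
        {"type": "smote", "k_neighbors": hp.quniform("k_neighbors", 3, 9, 1)},
        {"type": "random_over"},
    ]),
    "classifier": hp.choice("classifier", [
        {"type": "rf", "n_estimators": hp.quniform("n_estimators", 50, 500, 50)},
        {"type": "dt", "max_depth": hp.quniform("max_depth", 2, 20, 1)},
    ]),
}

def make_objective(X, y):
    def objective(cfg):
        r, c = cfg["resampler"], cfg["classifier"]
        sampler = (SMOTE(k_neighbors=int(r["k_neighbors"]))
                   if r["type"] == "smote" else RandomOverSampler())
        clf = (RandomForestClassifier(n_estimators=int(c["n_estimators"]))
               if c["type"] == "rf"
               else DecisionTreeClassifier(max_depth=int(c["max_depth"])))
        # imblearn's Pipeline applies the resampler only during fitting
        model = Pipeline([("resample", sampler), ("clf", clf)])
        # TPE minimizes, so return the negated cross-validated balanced accuracy
        return -cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy").mean()
    return objective

# best = fmin(make_objective(X, y), space, algo=tpe.suggest,
#             max_evals=100, trials=Trials())
```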