Background: Timely identification of deteriorating COVID-19 patients is needed to guide changes in clinical management and admission to intensive care units (ICUs). There is significant concern that widely used early warning scores (EWSs) underestimate illness severity in COVID-19 patients; we therefore developed an early warning model specifically for COVID-19 patients.

Methods: We retrospectively collected electronic medical record data to extract predictors and used these to fit a random forest model. To simulate the situation in which the model would have been developed after the first and implemented during the second COVID-19 'wave' in the Netherlands, we performed a temporal validation by splitting all included patients into groups admitted before and after August 1, 2020. Furthermore, we propose a method for dynamic model updating to retain model performance over time. We evaluated model discrimination and calibration, performed a decision curve analysis, and quantified the importance of predictors using SHapley Additive exPlanations (SHAP) values.

Results: We included 3514 COVID-19 patient admissions from six Dutch hospitals between February 2020 and May 2021, and used a total of 18 predictors for model fitting. The model showed higher discriminative performance in terms of partial area under the receiver operating characteristic curve (0.82 [0.80-0.84]) than the National Early Warning Score (0.72 [0.69-0.74]) and the Modified Early Warning Score (0.67 [0.65-0.69]), a greater net benefit over a range of clinically relevant model thresholds, and relatively good calibration (intercept = 0.03 [-0.09 to 0.14], slope = 0.79 [0.73-0.86]).
Conclusions: This study shows the potential benefit of moving from early warning models for the general inpatient population to models for specific patient groups. Further (independent) validation of the model is needed.
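The development setup described above — fit on first-wave admissions, validate temporally on second-wave admissions, and report (partial) AUC — can be sketched as follows. This is a minimal illustration, not the study's pipeline: the data, the column names (`admission_date`, `resp_rate`, `spo2`, `heart_rate`, `deteriorated`), and the model settings are all made up, and the real model used 18 predictors.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic admission-level data; all columns are illustrative placeholders.
rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "admission_date": pd.to_datetime("2020-02-01")
        + pd.to_timedelta(rng.integers(0, 450, n), unit="D"),
    "resp_rate": rng.normal(20, 5, n),
    "spo2": rng.normal(94, 3, n),
    "heart_rate": rng.normal(90, 15, n),
})
# Synthetic deterioration outcome, loosely tied to the vital signs.
logit = 0.15 * (df["resp_rate"] - 20) - 0.2 * (df["spo2"] - 94)
df["deteriorated"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Temporal validation: develop on first-wave admissions,
# evaluate on admissions from August 1, 2020 onwards.
train = df[df["admission_date"] < "2020-08-01"]
test = df[df["admission_date"] >= "2020-08-01"]
features = ["resp_rate", "spo2", "heart_rate"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train[features], train["deteriorated"])
scores = model.predict_proba(test[features])[:, 1]

# Full AUC and a partial AUC restricted to a low-false-positive region
# (the 0.33 cutoff here is an arbitrary choice for illustration).
auc = roc_auc_score(test["deteriorated"], scores)
pauc = roc_auc_score(test["deteriorated"], scores, max_fpr=0.33)
```

Note that a purely temporal split like this is what makes the validation an honest simulation of deployment: no second-wave information leaks into model fitting, which is also the setting that motivates the dynamic updating scheme mentioned in the abstract.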
Taskesen, E.; Huisman, S.M.H.; Mahfouz, A.; Krijthe, J.H.; Ridder, J. de; Stolpe, A. van de; ... ; Reinders, M.J.T. 2018
In many domains of science and society, the amount of data being gathered is increasing rapidly. To estimate the input-output relationships that are often of interest, supervised learning techniques rely on a specific type of data: labeled examples for which we know both the input and an outcome. The problem of semi-supervised learning is how to use the increasingly abundant unlabeled examples, with unknown outcomes, to improve supervised learning methods. This thesis is concerned with the question of whether and how these improvements are possible in a "robust", or safe, way: can we guarantee that these methods do not lead to worse performance than the supervised solution? We show that for some supervised classifiers, most notably the least squares classifier, semi-supervised adaptations can be constructed for which this non-degradation in performance can indeed be guaranteed, in terms of the surrogate loss used by the classifier. Since these guarantees are given in terms of the surrogate loss, we explore why this is a useful criterion for evaluating performance. We then prove that semi-supervised versions with strict non-degradation guarantees are not possible for a large class of commonly used supervised classifiers. Other aspects covered in the thesis include optimistic learning, the peaking phenomenon, and reproducibility.
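One way to build a semi-supervised least squares classifier in the constrained spirit the abstract describes is to restrict the semi-supervised solution to parameter vectors that are least-squares solutions for *some* soft labeling of the unlabeled data, and pick the labeling whose solution best fits the labeled examples. The toy sketch below illustrates that idea only; the thesis's actual constructions and their guarantees are more involved, and the data and dimensions here are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy two-class data with labels in {0, 1}.
n_l, n_u, d = 20, 50, 2
X_l = np.vstack([rng.normal(-1, 1, (n_l // 2, d)),
                 rng.normal(+1, 1, (n_l // 2, d))])
y_l = np.r_[np.zeros(n_l // 2), np.ones(n_l // 2)]
X_u = np.vstack([rng.normal(-1, 1, (n_u // 2, d)),
                 rng.normal(+1, 1, (n_u // 2, d))])

def add_intercept(X):
    return np.c_[np.ones(len(X)), X]

Xl, Xu = add_intercept(X_l), add_intercept(X_u)
X_all = np.vstack([Xl, Xu])

# Plain supervised least squares classifier (unconstrained minimizer
# of the quadratic surrogate loss on the labeled data).
beta_sup = np.linalg.lstsq(Xl, y_l, rcond=None)[0]

def beta_of(q):
    # Least squares solution when the unlabeled points get soft labels q.
    return np.linalg.lstsq(X_all, np.r_[y_l, q], rcond=None)[0]

def labeled_loss(q):
    # Quadratic surrogate loss on the labeled data for that solution.
    b = beta_of(q)
    return float(np.sum((Xl @ b - y_l) ** 2))

# Search over soft labelings q in [0, 1]^n_u for the constrained
# solution that fits the labeled data best.
res = minimize(labeled_loss, x0=np.full(n_u, 0.5),
               bounds=[(0.0, 1.0)] * n_u, method="L-BFGS-B")
beta_semi = beta_of(res.x)
```

By construction the semi-supervised solution lies in a restricted set, so its surrogate loss on the labeled data can never beat the unconstrained supervised minimizer; the interesting question, which this sketch does not settle, is how the two compare on the full data, which is exactly where the thesis's guarantees and impossibility results live.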