Background: The coronavirus disease 2019 (COVID-19) presents an urgent threat to global health. Prediction models that accurately estimate mortality risk in hospitalized patients could assist medical staff in treatment and in allocating limited resources. Aims: To externally validate two promising previously published risk scores that predict in-hospital mortality among hospitalized COVID-19 patients. Methods: Two prospective cohorts were available: a cohort of 1028 patients admitted to one of nine hospitals in Lombardy, Italy (the Lombardy cohort), and a cohort of 432 patients admitted to a hospital in Leiden, the Netherlands (the Leiden cohort). The endpoint was in-hospital mortality. All patients were adults and tested COVID-19 PCR-positive. Model discrimination and calibration were assessed. Results: The C-statistic of the 4C mortality score was good in the Lombardy cohort (0.85, 95% CI: 0.82-0.89) and in the Leiden cohort (0.87, 95% CI: 0.80-0.94). Model calibration was acceptable in the Lombardy cohort but poor in the Leiden cohort, as the model systematically overpredicted the mortality risk for all patients. The C-statistic of the CURB-65 score was good in the Lombardy cohort (0.80, 95% CI: 0.75-0.85) and in the Leiden cohort (0.82, 95% CI: 0.76-0.88). The mortality rate in the CURB-65 development cohort was much lower than the mortality rate in the Lombardy cohort. A similar but less pronounced trend was found for patients in the Leiden cohort. Conclusion: Although performances did not differ greatly, the 4C mortality score showed the best performance. However, because of quickly changing circumstances, model recalibration may be necessary before using the 4C mortality score.
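The two performance measures used above can be illustrated with a minimal sketch. The C-statistic is the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor, and calibration-in-the-large compares the mean predicted risk to the observed mortality rate (a model that "systematically overpredicts" has a mean predicted risk well above the observed rate). The risk values and outcomes below are invented for illustration, not data from either cohort:

```python
import numpy as np

def c_statistic(y_true, y_pred):
    """C-statistic (concordance): probability that a randomly chosen
    event patient has a higher predicted risk than a randomly chosen
    non-event patient. Tied predictions count as 0.5."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    events = y_pred[y_true == 1]
    nonevents = y_pred[y_true == 0]
    # Compare every (event, non-event) pair of predicted risks.
    diff = events[:, None] - nonevents[None, :]
    concordant = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return concordant / (len(events) * len(nonevents))

# Hypothetical predicted in-hospital mortality risks and observed deaths.
risk = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1, 0.05])
died = np.array([1,   1,   0,   1,   0,   0,   0,   0])

auc = c_statistic(died, risk)

# Calibration-in-the-large: mean predicted risk vs. observed event rate.
# Systematic overprediction shows up as risk.mean() >> died.mean().
print(round(auc, 3), round(risk.mean(), 3), round(died.mean(), 3))
```

Full calibration assessment would additionally compare predicted and observed risks across risk groups or via a smoothed calibration curve, but the gap between mean predicted risk and observed rate already captures the kind of systematic overprediction reported for the Leiden cohort.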
Ramspek, C.L.; Teece, L.; Snell, K.I.E.; Evans, M.; Riley, R.D.; Smeden, M. van; ... ; Diepen, M. van 2021
Background: External validation of prognostic models is necessary to assess the accuracy and generalizability of the model to new patients. If models are validated in a setting in which competing events occur, these competing risks should be accounted for when comparing predicted risks to observed outcomes. Methods: We discuss existing measures of calibration and discrimination that incorporate competing events for time-to-event models. These methods are illustrated using a clinical-data example concerning the prediction of kidney failure in a population with advanced chronic kidney disease (CKD), using the guideline-recommended Kidney Failure Risk Equation (KFRE). The KFRE was developed using Cox regression in a diverse population of CKD patients and has been proposed for use in patients with advanced CKD, in whom death is a frequent competing event. Results: When validating the 5-year KFRE with methods that account for competing events, it becomes apparent that the 5-year KFRE considerably overestimates the real-world risk of kidney failure. The absolute overestimation was 10 percentage points on average and 29 percentage points in older high-risk patients. Conclusions: It is crucial that competing events are accounted for during external validation to provide a more reliable assessment of the performance of a model in clinical settings in which competing risks occur.
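The core issue can be made concrete with a small sketch. When a competing event (here, death before kidney failure) is treated as ordinary censoring, the one-minus-Kaplan-Meier estimate overstates the observed risk of kidney failure; the Aalen-Johansen cumulative incidence correctly treats deceased patients as no longer able to reach kidney failure. The event times below are invented for illustration (event codes: 0 = censored, 1 = kidney failure, 2 = death):

```python
import numpy as np

def aalen_johansen(times, events, cause, horizon):
    """Cumulative incidence of `cause` by `horizon`, with other event
    types treated as competing events (Aalen-Johansen estimator)."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    surv, cif = 1.0, 0.0  # overall event-free survival, cumulative incidence
    for t in np.unique(times[(events != 0) & (times <= horizon)]):
        at_risk = np.sum(times >= t)
        d_cause = np.sum((times == t) & (events == cause))
        d_any = np.sum((times == t) & (events != 0))
        cif += surv * d_cause / at_risk   # hazard of the cause of interest
        surv *= 1 - d_any / at_risk       # any event removes patients from risk
    return cif

def one_minus_km(times, events, cause, horizon):
    """Naive estimate: competing events incorrectly treated as censoring."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    surv = 1.0
    for t in np.unique(times[(events == cause) & (times <= horizon)]):
        at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == cause))
        surv *= 1 - d / at_risk
    return 1 - surv

# Hypothetical follow-up data (years) with frequent competing deaths.
t = [1, 2, 3, 4, 5, 6, 7, 8]
e = [2, 1, 2, 1, 0, 2, 1, 0]

cif = aalen_johansen(t, e, cause=1, horizon=8)   # ~0.417
naive = one_minus_km(t, e, cause=1, horizon=8)   # ~0.657
print(round(cif, 3), round(naive, 3))
```

Comparing predicted risks against the naive estimate rather than the cumulative incidence would make a model appear better calibrated than it is in a population where death is common, which is the mechanism behind the overestimation reported for the KFRE.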
Ramspek, C.L.; Steyerberg, E.W.; Riley, R.D.; Rosendaal, F.R.; Dekkers, O.M.; Dekker, F.W.; Diepen, M. van 2021
Etiological research aims to uncover causal effects, whilst prediction research aims to forecast an outcome with the best accuracy. Causal and prediction research usually require different methods, and yet their findings may get conflated when reported and interpreted. The aim of the current study is to quantify the frequency of conflation between etiological and prediction research, to discuss common underlying mistakes, and to provide recommendations on how to avoid these. Observational cohort studies published in January 2018 in the top-ranked journals of six distinct medical fields (Cardiology, Clinical Epidemiology, Clinical Neurology, General and Internal Medicine, Nephrology and Surgery) were included in the current scoping review. Data on conflation were extracted through signaling questions. In total, 180 studies were included. Overall, 26% (n = 46) contained conflation between etiology and prediction. The frequency of conflation varied across medical fields and journal impact factors. Of the causal studies, 22% were conflated, mainly due to the selection of covariates based on their ability to predict, without taking the causal structure into account. Of the prediction studies, 38% were conflated; the most frequent reason was a causal interpretation of covariates included in a prediction model. Conflation of etiology and prediction is a common methodological error in observational medical research and is more frequent in prediction studies. As this may lead to biased estimations and erroneous conclusions, researchers must be careful when designing, interpreting and disseminating their research to ensure this conflation is avoided.