Background: While clinical prediction models (CPMs) are increasingly used to guide patient care, the performance and clinical utility of these CPMs in new patient cohorts are poorly understood. Methods: We performed 158 external validations of 104 unique CPMs across 3 domains of cardiovascular disease (primary prevention, acute coronary syndrome, and heart failure). Validations were performed in publicly available clinical trial cohorts, and model performance was assessed using measures of discrimination, calibration, and net benefit. To explore potential reasons for poor model performance, CPM-clinical trial cohort pairs were stratified by relatedness, a domain-specific set of characteristics used to qualitatively grade the similarity of the derivation and validation patient populations. We also examined the model-based C-statistic to assess whether changes in discrimination were attributable to differences in case-mix between the derivation and validation samples. The impact of model updating on performance was also assessed. Results: Discrimination decreased significantly between model derivation (0.76 [interquartile range (IQR), 0.73-0.78]) and validation (0.64 [IQR, 0.60-0.67]; P<0.001), but approximately half of this decrease was attributable to narrower case-mix in the validation samples. CPMs had better discrimination when tested in related compared with distantly related trial cohorts. Calibration slope was also significantly higher in related trial cohorts (0.77 [IQR, 0.59-0.90]) than in distantly related cohorts (0.59 [IQR, 0.43-0.73]; P=0.001).
When considering the full range of possible decision thresholds between half and twice the outcome incidence, 91% of models carried a risk of harm (net benefit below the default strategy) at some threshold; this risk could be reduced substantially by updating the model intercept, updating the calibration slope, or complete re-estimation. Conclusions: There are significant decreases in model performance when cardiovascular disease CPMs are applied to new patient populations, resulting in a substantial risk of harm. Model updating can mitigate these risks. Care should be taken when using CPMs to guide clinical decision-making.
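The "risk of harm" criterion above can be made concrete with a short decision-curve sketch. This is a hypothetical illustration, not the authors' code: net benefit at a threshold pt is TP/n - FP/n × pt/(1-pt), the treat-all and treat-none strategies are the defaults, and a model is flagged as potentially harmful at any threshold (scanned between half and twice the outcome incidence, as in the abstract) where its net benefit falls below the better default.

```python
import numpy as np

def net_benefit(y_true, p_hat, pt):
    """Net benefit of treating patients with predicted risk >= pt."""
    n = len(y_true)
    treat = p_hat >= pt
    tp = np.sum(treat & (y_true == 1))  # true positives
    fp = np.sum(treat & (y_true == 0))  # false positives
    return tp / n - (fp / n) * (pt / (1 - pt))

def harm_thresholds(y_true, p_hat, n_grid=25):
    """Thresholds between half and twice the outcome incidence where the
    model's net benefit falls below the better of treat-all / treat-none."""
    incidence = np.mean(y_true)
    harmful = []
    for pt in np.linspace(0.5 * incidence, 2 * incidence, n_grid):
        nb_model = net_benefit(y_true, p_hat, pt)
        nb_all = incidence - (1 - incidence) * pt / (1 - pt)  # treat everyone
        nb_none = 0.0                                         # treat no one
        if nb_model < max(nb_all, nb_none):
            harmful.append(pt)
    return harmful
```

A perfect predictor is never flagged, while a model that ranks patients backwards is flagged at every threshold; real CPMs fall in between, which is why 91% of the validated models were harmful at some, but not all, thresholds.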
Objective: To assess whether the Prediction model Risk Of Bias ASsessment Tool (PROBAST) and a shorter version of this tool can identify clinical prediction models (CPMs) that perform poorly at external validation. Study Design and Setting: We evaluated risk of bias (ROB) on 102 CPMs from the Tufts CPM Registry, comparing PROBAST to a short form consisting of six PROBAST items anticipated to best identify high ROB. We then applied the short form to all CPMs in the Registry with at least one validation (n = 556) and assessed the change in discrimination (dAUC) in external validation cohorts (n = 1,147). Results: PROBAST classified 98/102 CPMs as high ROB. The short form identified 96 of these 98 as high ROB (98% sensitivity), with perfect specificity. In the full CPM Registry, 527 of 556 CPMs (95%) were classified as high ROB, 20 (3.6%) as low ROB, and 9 (1.6%) as unclear ROB. Only one model with unclear ROB was reclassified to high ROB after full PROBAST assessment of all low and unclear ROB models. The median change in discrimination was significantly smaller in low ROB models (dAUC -0.9%; IQR, -6.2% to 4.2%) than in high ROB models (dAUC -11.7%; IQR, -33.3% to 2.6%; P < 0.001). Conclusion: High ROB is pervasive among published CPMs. It is associated with poor discriminative performance at validation, supporting the application of PROBAST or a shorter version in CPM reviews. (c) 2021 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
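The short form's reported operating characteristics follow directly from the counts in the abstract, treating full PROBAST as the reference standard. A minimal worked check (the 2×2 counts are taken from the abstract; the function name is illustrative):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Short form vs. full PROBAST on 102 CPMs: 98 rated high ROB by PROBAST,
# of which the short form flagged 96; none of the 4 non-high-ROB models
# were flagged by the short form.
sens, spec = sens_spec(tp=96, fn=2, tn=4, fp=0)
```

This reproduces the stated 98% sensitivity (96/98) and perfect specificity (4/4).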
Background: There are many clinical prediction models (CPMs) available to inform treatment decisions for patients with cardiovascular disease. However, the extent to which they have been externally tested, and how well they generally perform, has not been broadly evaluated. Methods: A SCOPUS citation search was run on March 22, 2017 to identify external validations of cardiovascular CPMs in the Tufts Predictive Analytics and Comparative Effectiveness CPM Registry. We assessed the extent of external validation and performance heterogeneity across databases, and explored factors associated with model performance, including a global assessment of the clinical relatedness between the derivation and validation data. Results: We identified 2030 external validations of 1382 CPMs. Eight hundred seven (58%) of the CPMs in the Registry have never been externally validated. On average, there were 1.5 validations per CPM (range, 0-94). The median external validation area under the receiver operating characteristic curve was 0.73 (25th-75th percentile [interquartile range (IQR)], 0.66-0.79), representing a median percent decrease in discrimination of -11.1% (IQR, -32.4% to +2.7%) compared with performance on derivation data. 81% (n=1333) of validations reporting area under the receiver operating characteristic curve showed discrimination below that reported in the derivation dataset. 53% (n=983) of the validations reported some measure of CPM calibration. For CPMs evaluated more than once, there was typically a large range of performance.
Of 1702 validations classified by relatedness, the percent change in discrimination was -3.7% (IQR, -13.2% to 3.1%) for closely related validations (n=123), -9.0% (IQR, -27.6% to 3.9%) for related validations (n=862), and -17.2% (IQR, -42.3% to 0%) for distantly related validations (n=717; P<0.001). Conclusions: Many published cardiovascular CPMs have never been externally validated, and for those that have, apparent performance during development is often overly optimistic. A single external validation appears insufficient to broadly understand the performance heterogeneity across different settings.
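The percent changes in discrimination quoted in these abstracts appear to be scaled to the derivation AUC's distance above chance (0.5), so that a drop to AUC = 0.5 counts as a 100% loss of discrimination; that convention is an assumption here, but it is consistent with the reported numbers (a median validation AUC of 0.73 against a typical derivation AUC of 0.76 gives roughly the -11% figure). A minimal sketch:

```python
def pct_change_discrimination(auc_derivation, auc_validation):
    """Percent change in discrimination, scaled so that a fall to
    AUC = 0.5 (no discrimination) corresponds to a 100% loss."""
    return 100.0 * (auc_validation - auc_derivation) / (auc_derivation - 0.5)
```

On this scale, a model derived at AUC 0.76 and validated at 0.73 has lost about 11.5% of its discrimination, even though the raw AUC fell by only 0.03.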
Terkourafi, M.; Catedral, L.; Haider, I.; Karimzad, F.; Melgares, J.; Mostacero, C.; ... ; Weissman, B. 2018
Gaucher disease is caused by inherited deficiency of lysosomal glucocerebrosidase. Proteome analysis of laser-dissected splenic Gaucher cells revealed increased amounts of glycoprotein nonmetastatic melanoma protein B (gpNMB). Plasma gpNMB was also elevated, correlating with chitotriosidase and CCL18, which are established markers for human Gaucher cells. In Gaucher mice, gpNMB is also produced by Gaucher cells. Correction of glucocerebrosidase deficiency in mice by gene transfer or pharmacological substrate reduction reverses gpNMB abnormalities. In conclusion, gpNMB acts as a marker for glucosylceramide-laden macrophages in man and mouse, and gpNMB should be considered a candidate biomarker for treatment monitoring in Gaucher disease.