Improving prediction models with new markers: A comparison of updating strategies

D. Nieboer, Y. Vergouwe, Danna P. Ankerst, Monique J. Roobol, Ewout W. Steyerberg

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Background: New markers hold the promise of improving risk prediction for individual patients. We aimed to compare the performance of different strategies for extending a previously developed prediction model with a new marker.

Methods: Our motivating example was the extension of a risk calculator for prostate cancer with a new marker that was available in a relatively small dataset. The performance of the strategies was also investigated in simulations. Development, marker and test sets of different sample sizes, originating from the same underlying population, were generated. A prediction model was fitted using logistic regression in the development set, extended using the marker set and validated in the test set. The extension strategies considered were re-estimating individual regression coefficients, updating predictions using conditional likelihood ratios (LR), and imputing marker values in the development set and subsequently fitting a model in the combined development and marker sets. Sample sizes considered for the development and marker sets were 500 and 100, 500 and 500, and 100 and 500 patients, respectively. Discriminative ability of the extended models was quantified with the concordance statistic (c-statistic) and calibration with the calibration slope.

Results: All strategies led to extended models with increased discrimination (c-statistic increase from 0.75 to 0.80 in the test sets). Strategies estimating a large number of parameters (re-estimation of all coefficients and updating using conditional LR) led to overfitting (calibration slope below 1). Parsimonious methods, which limit the number of coefficients to be re-estimated or apply shrinkage after model revision, limited the amount of overfitting. Combining the development and marker sets through imputation of missing marker values led to consistently well-performing models in all scenarios. Similar results were observed in the motivating example.
Conclusion: When the sample with the new marker information is small, parsimonious methods are required to prevent overfitting of a new prediction model. Combining all data with imputation of missing marker values is an attractive option, even if a relatively large marker dataset is available.
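One of the parsimonious strategies described above can be illustrated with a short, hypothetical sketch: a base logistic model is fitted in a simulated development set, and in the (smaller) marker set only the slope of the original linear predictor and the coefficient of the new marker are estimated, rather than re-estimating all coefficients. All data, coefficients and sample sizes below are invented for illustration and do not come from the paper; `scikit-learn` is used in place of the authors' actual software.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Development set: two established predictors x1, x2 (n = 500).
# The "true" coefficients (0.8, 0.5, 0.6) are arbitrary illustration values.
n_dev = 500
X_dev = rng.normal(size=(n_dev, 2))
lp_dev = 0.8 * X_dev[:, 0] + 0.5 * X_dev[:, 1]
y_dev = rng.binomial(1, 1 / (1 + np.exp(-lp_dev)))

# Previously developed prediction model
base = LogisticRegression().fit(X_dev, y_dev)

# Marker set: same predictors plus a new marker m (n = 100)
n_mark = 100
X_mark = rng.normal(size=(n_mark, 2))
m = rng.normal(size=n_mark)
lp_mark = 0.8 * X_mark[:, 0] + 0.5 * X_mark[:, 1] + 0.6 * m
y_mark = rng.binomial(1, 1 / (1 + np.exp(-lp_mark)))

# Parsimonious extension: keep the original coefficients bundled in the
# old linear predictor and estimate only two parameters in the marker
# set -- a calibration slope for the old linear predictor and a
# coefficient for the new marker.
lp_old = base.decision_function(X_mark)
ext = LogisticRegression().fit(np.column_stack([lp_old, m]), y_mark)
print(ext.coef_)  # [slope on old linear predictor, marker coefficient]
```

With only two free parameters instead of a full re-fit, this kind of extension is much less prone to the overfitting (calibration slope below 1) that the abstract reports for strategies that re-estimate all coefficients in a small marker set.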

Original language: English (US)
Article number: 128
Journal: BMC Medical Research Methodology
Volume: 16
Issue number: 1
DOIs
State: Published - Sep 27 2016

Keywords

  • Logistic regression
  • Model updating
  • Prediction model
  • Prostate cancer

ASJC Scopus subject areas

  • Epidemiology
  • Health Informatics

