Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction
1 Department of Statistics, University of Munich, Ludwigstr 33, D-80539 Munich, Germany
2 Sylvia Lawry Centre for Multiple Sclerosis Research, Hohenlindenerstr 1, D-81677 Munich, Germany
3 Department of Medical Informatics, Biometry and Epidemiology, University of Munich, Marchioninistr 15, D-81377 Munich, Germany
BMC Medical Research Methodology 2009, 9:85. doi:10.1186/1471-2288-9-85. Published: 21 December 2009
In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias.
In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure.
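The permutation scheme described above can be sketched in a few lines. This is a toy illustration with simulated data, not the paper's actual microarray pipeline: shuffling only the class labels destroys any predictor-outcome association while leaving the gene-gene correlation structure of the data matrix untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a high-dimensional microarray matrix:
# n = 40 samples, p = 1000 genes, balanced binary class labels.
n, p = 40, 1000
X = rng.standard_normal((n, p))
y = np.array([0] * 20 + [1] * 20)

# Permuting only the class labels makes the predictors uninformative
# for the (shuffled) outcome, while the correlation structure among
# the columns of X is preserved exactly.
y_permuted = rng.permutation(y)
```

Because only `y` is shuffled, any classifier evaluated on `(X, y_permuted)` faces pure noise, so its true error rate is 50% by construction.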
We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly.
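The selection bias being quantified can be demonstrated with a small self-contained sketch (hypothetical toy data and a simple nearest-centroid classifier, not one of the 124 variants from the study). The "variants" here are values of a gene-selection tuning parameter; gene selection is redone inside each training fold, as the cross-validation scheme requires, yet reporting only the minimal error over variants is still optimistic.

```python
import numpy as np

def cv_error(X, y, k_genes, n_folds=5, seed=0):
    """5-fold CV error of a nearest-centroid classifier that keeps the
    k_genes genes with the largest absolute between-class mean difference,
    re-selected within each training fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    wrong = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        Xtr, ytr = X[train], y[train]
        # Gene selection on the training fold only (no information leak).
        diff = np.abs(Xtr[ytr == 0].mean(0) - Xtr[ytr == 1].mean(0))
        genes = np.argsort(diff)[-k_genes:]
        c0 = Xtr[ytr == 0][:, genes].mean(0)
        c1 = Xtr[ytr == 1][:, genes].mean(0)
        for i in fold:
            x = X[i, genes]
            pred = 0 if np.sum((x - c0) ** 2) < np.sum((x - c1) ** 2) else 1
            wrong += pred != y[i]
    return wrong / len(y)

rng = np.random.default_rng(1)
n, p = 40, 500
X = rng.standard_normal((n, p))                  # pure noise predictors
y = rng.permutation([0] * 20 + [1] * 20)         # permuted, uninformative labels

# Several tuning variants of the same classifier; the study considers
# 124 variants spanning both tuning and method choice.
errors = {k: cv_error(X, y, k) for k in (5, 10, 20, 50, 100)}
best = min(errors.values())
print(f"errors per variant: {errors}")
print(f"minimal (optimistically selected) error: {best:.2f}")
```

On uninformative labels every variant has a true error rate of 50%, yet `best` is, by construction, at or below the average observed error over variants: picking the minimum after seeing the results is itself an optimization on the test information.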
The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively, even though the true error rate for such uninformative data is 50% by construction. We conclude that the strategy of presenting only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and we suggest alternative approaches for properly reporting classification accuracy.