Debate

Quantifying errors without random sampling

Carl V Phillips1* and Luwanna M LaPole2

Author Affiliations

1 Management and Policy Sciences, University of Texas School of Public Health, and Center for Clinical Research and Evidence Based Medicine, University of Texas Medical School, Houston, Texas, USA

2 University of Minnesota School of Public Health, Minneapolis, Minnesota, USA

BMC Medical Research Methodology 2003, 3:9 doi:10.1186/1471-2288-3-9

Published: 12 June 2003

Abstract

Background

All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision.

Discussion

We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States.
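As a rough illustration of the Monte Carlo approach described above, the sketch below propagates several sources of uncertainty through a simple incidence calculation by representing each uncertain input as a probability distribution and summarizing the resulting distribution of outputs. The variable names, distributions, and numerical values are illustrative assumptions for this sketch, not the inputs used in the foodborne illness analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo draws

# Hypothetical point estimate: reported case count for some illness.
# (All values below are illustrative, not taken from the paper.)
reported_cases = 38_000

# Under-reporting multiplier: true cases per reported case, assumed
# to lie between 10 and 40 (uniform for simplicity).
underreport = rng.uniform(10, 40, N)

# Fraction of cases attributable to the exposure of interest, modeled
# with symmetric uncertainty around 0.25 and truncated to [0, 1].
attrib_frac = rng.normal(0.25, 0.05, N).clip(0, 1)

# Propagate all sources of uncertainty through the calculation at once:
# each draw is one internally consistent scenario.
total = reported_cases * underreport * attrib_frac

# Report a distribution summary rather than a single falsely precise number.
median = np.median(total)
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"Estimate: {median:,.0f} (95% uncertainty interval {lo:,.0f} to {hi:,.0f})")
```

Reporting the resulting interval, and rounding the central estimate to the significant figures that interval actually supports, conveys the combined uncertainty honestly in a way that a single point estimate cannot.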

Summary

Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.