How well do clinical prediction rules perform in identifying serious infections in acutely ill children across an international network of ambulatory care datasets?
1 Department of General Practice, KU Leuven, Kapucijnenvoer 33 blok J, 3000 Leuven, Belgium
2 Department of Primary Care Health Sciences, University of Oxford, New Radcliffe House, Woodstock Road, Oxford, OX2 6GG, UK
3 Erasmus MC - Sophia Children's Hospital, Dr Molewaterplein 60, 3015 GJ Rotterdam, The Netherlands
4 Department of General Practice, University Medical Center Groningen, Hanzeplein 1, PO Box 30001, 9700 RB Groningen, The Netherlands
5 Department of General and Adolescent Paediatrics, University College London, Institute of Child Health, London, UK
6 Research Institute Caphri, Maastricht University, PB 313, NL-6200 MD Maastricht, The Netherlands
BMC Medicine 2013, 11:10. doi:10.1186/1741-7015-11-10. Published: 15 January 2013
Diagnosing serious infections in children is challenging because of the low incidence of such infections and their non-specific presentation early in the course of illness. Prediction rules are promoted as a means of improving the recognition of serious infections. A recent systematic review identified seven clinical prediction rules, only one of which had been prospectively validated, calling into question their appropriateness for clinical practice. We aimed to examine the diagnostic accuracy of these rules in multiple ambulatory care populations in Europe.
Four clinical prediction rules and two national guidelines, all based on signs and symptoms, were validated retrospectively in seven individual-patient datasets from primary care and emergency departments, comprising 11,023 children from the UK, the Netherlands, and Belgium. The accuracy of each rule was tested, with pre-test and post-test probabilities displayed using dumbbell plots, and with settings stratified by the prevalence of serious infection as low (LP; <5%), intermediate (IP; 5 to 20%), or high (HP; >20%). In LP and IP settings, sensitivity should exceed 90% to effectively rule out serious infection.
In LP settings, a five-stage decision tree and a pneumonia rule had sensitivities of >90% (with a negative likelihood ratio (NLR) of <0.2) for ruling out serious infections, whereas the sensitivities of a meningitis rule and the Yale Observation Scale (YOS) varied widely, between 33 and 100%. In IP settings, the five-stage decision tree, the pneumonia rule, and the YOS had sensitivities between 22 and 88%, with NLRs ranging from 0.3 to 0.8. In an HP setting, the five-stage decision tree provided a sensitivity of 23%. In LP or IP settings, the sensitivities of the National Institute for Health and Clinical Excellence guideline for feverish illness and the Dutch College of General Practitioners alarm symptoms ranged from 81 to 100%.
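The rule-out value implied by these figures can be illustrated with the standard odds form of Bayes' theorem, which converts a pre-test probability (the setting's prevalence) and a likelihood ratio into a post-test probability. The sketch below is a generic illustration of this textbook calculation, not a reproduction of the study's analysis; the example prevalence and NLR values are taken from the thresholds quoted above.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert a pre-test probability into a post-test probability
    using Bayes' theorem in odds form:
    post-test odds = pre-test odds x likelihood ratio."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# At the upper bound of a low-prevalence setting (pre-test probability 5%),
# a negative result from a rule with NLR 0.2 lowers the probability of
# serious infection to about 1%.
print(round(post_test_probability(0.05, 0.2), 3))  # → 0.01
```

This makes concrete why an NLR below 0.2 is treated as useful for ruling out: with the same 5% pre-test probability, an NLR of 0.8 (the weak end of the IP-setting results) would leave the post-test probability at roughly 4%, barely below the starting prevalence.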
None of the clinical prediction rules examined in this study provided perfect diagnostic accuracy. In LP or IP settings, prediction rules and evidence-based guidelines had high sensitivity, providing promising rule-out value for serious infections in these datasets, although all left some residual uncertainty. Additional clinical assessment or testing, such as point-of-care laboratory tests, may be needed to increase clinical certainty. None of the prediction rules identified appeared valuable for HP settings such as emergency departments.