Open Access Research article

A new implicit review instrument for measuring quality of care delivered to pediatric patients in the emergency department

Madan Dharmar^1,2*, James P Marcin^1,2, Nathan Kuppermann^1,2,3, Emily R Andrada^1,3, Stacey Cole^2, Danielle J Harvey^4 and Patrick S Romano^1,2,5

Author Affiliations

1 Department of Pediatrics, University of California, Davis, USA

2 Center for Health Services Research in Primary Care, University of California, Davis, USA

3 Department of Emergency Medicine, University of California, Davis, USA

4 Department of Public Health Sciences, University of California, Davis, USA

5 Division of General Internal Medicine, University of California, Davis, USA


BMC Emergency Medicine 2007, 7:13  doi:10.1186/1471-227X-7-13

Published: 23 August 2007



Background

Few outcomes experienced by children receiving care in the Emergency Department (ED) are amenable to measurement for the purpose of assessing quality of care. The purpose of this study was to develop, test, and validate a new implicit review instrument that measures the quality of care delivered to children in EDs.


Methods

We developed a structured implicit review instrument, with items rated on a 7-point scale, that encompasses four aspects of care: the physician's initial data gathering; integration of information and development of appropriate diagnoses; initial treatment plan and orders; and plan for disposition and follow-up. Two pediatric emergency medicine physicians applied the 5-item instrument to the records of children presenting in the highest triage category to four rural EDs, and we assessed the reliability of the average summary scores (possible range 5–35) across the two reviewers using standard measures. We also validated the instrument by comparing the mean summary score between patients with and without medication errors (ascertained independently by two pharmacists) using a two-sample t-test.
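The scoring scheme described above can be sketched in a few lines of Python. This is a minimal illustration only; the function names and the bounds check are ours, not part of the study's instrument.

```python
def summary_score(item_scores):
    """Sum of the 5 item ratings (each on a 1-7 scale), giving a 5-35 summary score."""
    assert len(item_scores) == 5 and all(1 <= s <= 7 for s in item_scores)
    return sum(item_scores)


def mean_summary(reviewer_a, reviewer_b):
    """Average summary score across the two reviewers for one patient record."""
    return (summary_score(reviewer_a) + summary_score(reviewer_b)) / 2
```

For example, if one reviewer rated every item 7 and the other rated every item 5, the averaged summary score for that record would be (35 + 25) / 2 = 30.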


Results

We reviewed the medical records of 178 pediatric patients. The mean and median summary scores for this cohort were 27.4 and 28.5, respectively. Internal consistency was high (Cronbach's alpha of 0.92 and 0.89 for the two reviewers). All items showed a significant (p < 0.005) positive correlation between reviewers using the Spearman rank correlation (range 0.24 to 0.39). Exact agreement on individual items between reviewers ranged from 70.2% to 85.4%. The intra-class correlation coefficient for the mean of the total summary score across the two reviewers was 0.65. The validity of the instrument was supported by a higher mean score among children without medication errors than among those with medication errors, a difference that trended toward statistical significance (28.5 vs. 26.0, p = 0.076).
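Two of the statistics reported here, Cronbach's alpha for internal consistency and the two-sample t statistic used for the medication-error comparison, can be sketched as follows. This is an illustrative sketch with hypothetical inputs, not the study's analysis code; we use Welch's (unequal-variance) form of the t statistic as one common choice.

```python
import numpy as np


def cronbach_alpha(ratings):
    """Internal consistency of a multi-item scale.

    ratings: (n_patients, n_items) array of one reviewer's item scores.
    Alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)          # sample variance per item
    total_var = ratings.sum(axis=1).var(ddof=1)      # variance of the summary score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)


def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances), e.g. for comparing
    summary scores between records with and without medication errors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)
```

Identical item columns yield an alpha of exactly 1, and uncorrelated or discordant items drive alpha toward zero or below, which is why alphas of 0.89–0.92 indicate high internal consistency.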


Conclusion

The instrument we developed to measure the quality of care provided to children in the ED has high internal consistency, fair-to-good inter-rater reliability and inter-rater correlation, and high content validity. Its validity is further supported by the finding that the average summary score was lower in the presence of medication errors, a difference that trended toward statistical significance.