Measuring agreement between decision support reminders: the cloud vs. the local expert
1 Department of BioHealth Informatics, Indiana University School of Informatics and Computing, Indiana University-Purdue University Indianapolis, 410 W. St., Suite 2000, Indianapolis, IN 46202, USA
2 Center for Biomedical Informatics, Regenstrief Institute, Inc, Indianapolis, IN USA
3 Center for Health Information and Communication, Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service CIN 13–416, Indianapolis, IN USA
4 Department of Internal Medicine, Indiana University School of Medicine, Indianapolis, IN USA
5 Department of Biostatistics, Indiana University, School of Medicine, Indianapolis, IN USA
6 Indiana University Cancer Center, Indianapolis, IN USA
7 Indiana Clinical Translational Science Institute, Indianapolis, IN USA
8 Harvard Medical School, Boston, MA USA
9 Division of General Medicine, Brigham and Women’s Hospital, Boston, MA USA
10 Partners HealthCare, Boston, MA USA
11 Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN USA
BMC Medical Informatics and Decision Making 2014, 14:31. doi:10.1186/1472-6947-14-31. Published: 10 April 2014
A cloud-based clinical decision support system (CDSS) was implemented to remotely provide evidence-based guideline reminders in support of preventive health care. Following implementation, we measured the agreement between preventive care reminders generated by an existing, local CDSS and the new, cloud-based CDSS operating on the same patient visit data.
Electronic health record data for the same set of patients seen in primary care were sent to both the cloud-based web service and the local CDSS. The clinical reminders returned by both services were captured for analysis. Cohen’s kappa coefficient was calculated to compare the two sets of reminders. Kappa statistics were further adjusted for prevalence and bias, given the potential effects of bias in the CDS logic and of prevalence in the relatively small sample of patients.
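For readers unfamiliar with the adjustment, the prevalence- and bias-adjusted kappa (PABAK) reduces to a simple function of observed agreement, PABAK = 2·p_o − 1, while Cohen’s kappa discounts agreement expected by chance. A minimal sketch of both statistics for two binary raters (function names and the toy data are illustrative, not from the study):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (lists of 0/1 labels)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    p_a1, p_b1 = sum(a) / n, sum(b) / n           # marginal prevalences
    # chance agreement from the raters' marginal distributions
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_o - p_e) / (1 - p_e)

def pabak(a, b):
    """Prevalence- and bias-adjusted kappa: 2 * p_o - 1."""
    p_o = sum(x == y for x, y in zip(a, b)) / len(a)
    return 2 * p_o - 1

# Toy example: each list marks whether a CDSS fired a given reminder
# for each of four visits (hypothetical data).
local = [1, 1, 1, 0]
cloud = [1, 1, 0, 0]
print(cohens_kappa(local, cloud))  # 0.5
print(pabak(local, cloud))         # 0.5
```

When reminder prevalence is skewed (most visits trigger, or fail to trigger, a reminder), chance agreement p_e inflates and Cohen’s kappa can look poor despite high raw agreement; PABAK sidesteps this by depending only on p_o, which is why the study reports adjusted values.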
The cloud-based CDSS generated 965 clinical reminders for 405 patient visits over 3 months. The local CDSS returned 889 reminders for the same patient visit data. When adjusted for prevalence and bias, agreement varied by reminder from 0.33 (95% CI 0.24–0.42) to 0.99 (95% CI 0.97–1.00), with almost perfect agreement for 7 of the 11 reminders.
Preventive care reminders delivered by two disparate CDS systems showed substantial agreement, and subtle differences in rule logic and terminology mapping appear to account for much of the discordance. Cloud-based CDSS therefore show promise, opening the door to future development and implementation in support of health care providers with limited resources for managing complex rules and logic.