Open Access Research article

Evaluation of an automated safety surveillance system using risk adjusted sequential probability ratio testing

Michael E Matheny 1,2,3*, Sharon-Lise T Normand 4, Thomas P Gross 5, Danica Marinac-Dabic 5, Nilsa Loyo-Berrios 5, Venkatesan D Vidi 6, Sharon Donnelly 6 and Frederic S Resnic 6

Author Affiliations

1 GRECC and Center for Health Services Research, Tennessee Valley Healthcare System, Veterans Administration, Nashville, TN, USA

2 Division of General Internal Medicine and Public Health, Vanderbilt University Medical Center, Nashville, TN, USA

3 Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA

4 Department of Health Care Policy, Harvard Medical School and Department of Biostatistics, Harvard School of Public Health, Boston, MA, USA

5 Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, MD, USA

6 Division of Cardiology, Brigham & Women's Hospital, Boston, MA, USA

BMC Medical Informatics and Decision Making 2011, 11:75 doi:10.1186/1472-6947-11-75

Published: 14 December 2011

Abstract

Background

Automated adverse outcome surveillance tools and methods have potential utility in quality improvement and medical product surveillance activities. Their use for assessing hospital performance on the basis of patient outcomes has received little attention. We compared risk-adjusted sequential probability ratio testing (RA-SPRT) implemented in an automated tool to Massachusetts public reports of 30-day mortality after isolated coronary artery bypass graft surgery.

Methods

A total of 23,020 isolated adult coronary artery bypass surgery admissions performed in Massachusetts hospitals between January 1, 2002 and September 30, 2007 were retrospectively re-evaluated. The RA-SPRT method was implemented within an automated surveillance tool to identify hospital outliers in yearly increments. We used an overall type I error rate of 0.05, an overall type II error rate of 0.10, and a threshold that signaled if the odds of dying within 30 days after surgery were at least twice the expected odds. Annual hospital outlier status, based on the state-reported classification, was considered the gold standard. An event was defined as at least one occurrence of a higher-than-expected hospital mortality rate during a given year.
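
The sketch below illustrates the kind of cumulative log-likelihood ratio test that underlies an RA-SPRT with these parameters, assuming a Spiegelhalter-style formulation; the function name, inputs, and boundary handling are illustrative and are not taken from the study's implementation.

```python
import math

def ra_sprt_alert(outcomes, expected_probs, odds_ratio=2.0, alpha=0.05, beta=0.10):
    """Risk-adjusted SPRT sketch (Spiegelhalter-style log-likelihood ratio).

    outcomes       -- 1 if the patient died within 30 days, else 0
    expected_probs -- risk-model predicted 30-day mortality probability per case
    Returns "alert" when the cumulative LLR crosses the upper Wald boundary,
    i.e. the evidence favors odds of death at least `odds_ratio` times expected.
    """
    upper = math.log((1 - beta) / alpha)   # ~2.89 for alpha=0.05, beta=0.10
    lower = math.log(beta / (1 - alpha))   # ~-2.25
    llr = 0.0
    for died, p in zip(outcomes, expected_probs):
        denom = 1 - p + odds_ratio * p     # normalizing term for the shifted odds
        llr += math.log(odds_ratio) - math.log(denom) if died else -math.log(denom)
        if llr >= upper:
            return "alert"                 # higher-than-expected mortality signal
        if llr <= lower:
            return "no alert"              # evidence consistent with expected odds
    return "undetermined"                  # neither boundary crossed this period

# Hypothetical example: a run of deaths among moderate-risk cases crosses the boundary
print(ra_sprt_alert([1, 1, 1, 1, 1, 0, 1, 1], [0.05] * 8))  # -> "alert"
```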

Results

We examined a total of 83 hospital-year observations. The RA-SPRT method signaled six 30-day mortality events among three hospitals, compared with five events among two hospitals identified in the state public reports, yielding a sensitivity of 100% (5/5) and a specificity of 98.8% (79/80).

Conclusions

The automated RA-SPRT method performed well, detecting all of the true institutional outliers with a low false-positive alert rate. Such a system could provide confidential, automated notification to local institutions in advance of public reporting, creating opportunities for earlier quality improvement interventions.