
A conceptual framework and protocol for defining clinical decision support objectives applicable to medical specialties

Abstract

Background

The U.S. Centers for Medicare and Medicaid Services established the Electronic Health Record (EHR) Incentive Program in 2009 to stimulate the adoption of EHRs. One component of the program requires eligible providers to implement clinical decision support (CDS) interventions that can improve performance on one or more quality measures pre-selected for each specialty. Because the unique decision-making challenges and existing HIT capabilities vary widely across specialties, the development of meaningful objectives for CDS within such programs must be supported by deliberative analysis.

Design

We developed a conceptual framework and protocol that combines evidence review with expert opinion to elicit clinically meaningful objectives for CDS directly from specialists. The framework links objectives for CDS to specialty-specific performance gaps while ensuring that a workable set of CDS opportunities is available to providers to address each performance gap. Performance gaps may include not only those with well-established quality measures but also priorities identified by specialists based on their clinical experience. Moreover, objectives are not constrained to performance gaps with existing CDS technologies, but rather may include those for which CDS tools might reasonably be expected to be developed in the near term, for example, by the beginning of Stage 3 of the EHR Incentive program. The protocol uses a modified Delphi expert panel process to elicit and prioritize CDS meaningful use objectives. Experts first rate the importance of performance gaps, beginning with a candidate list generated through an environmental scan and supplemented through nominations by panelists. For the highest priority performance gaps, panelists then rate the extent to which existing or future CDS interventions, characterized jointly as “CDS opportunities,” might impact each performance gap and the extent to which each CDS opportunity is compatible with specialists’ clinical workflows. The protocol was tested by expert panels representing four clinical specialties: oncology, orthopedic surgery, interventional cardiology, and pediatrics.


Introduction

Clinical decision support (CDS) is the process of providing persons involved in patient care with intelligently filtered and organized information, at appropriate times, to enable decisions that optimize health care and health outcomes.a The guidance and prompts that CDS can provide constitute one of the primary mechanisms by which electronic health records (EHRs) can transform the quality and efficiency of health care delivery [1]. Various studies have demonstrated that CDS can influence clinical practice by helping clinicians to improve diagnosis [2-8], improve quality and patient safety [9-17], adhere to guidelines for prevention and treatment [18-24], and avoid medication errors [25-30].

However, the actual use of CDS within EHRs has been uneven [31, 32]. A systematic review of CDS related to medication prescribing cited poor integration of CDS into clinical workflows and limited relevance and timeliness of clinical messaging as two key implementation barriers [33]. The review also found that CDS interventions that were endorsed by colleagues, facilitated doctor-patient interactions, and minimized perceived threats to professional autonomy, were more readily adopted. Other experts have noted that the effectiveness of CDS interventions could be significantly enhanced through clearer displays of relevant information, more effective summaries of vast quantities of information relevant to specific decision making contexts, and by accounting for patients’ comorbidities in clinical recommendations [34]. Many such barriers must be addressed before CDS can truly transform clinical care.

To that end, the Health Information Technology for Economic and Clinical Health Act (HITECH) authorized the Centers for Medicare and Medicaid Services (CMS) to provide incentive payments to eligible providers who successfully demonstrate “meaningful use” of EHRs, including the use of CDS [35]. The initial rules proposed for the EHR Incentive Program included a requirement that providers implement 5 CDS artifacts, each targeted to address one of the specific clinical quality measures that had been designated as options for quality reporting. The final rule required only that providers attest to having implemented one “appropriate” clinical decision support rule relevant to the provider’s specialty. The change in approach between the proposed and final rule was responsive to commenters’ concerns that fewer than 5 quality measures would likely be reported by most eligible professionals, and that explicitly linking CDS requirements to clinical quality measures would “put constraints on providers and eliminate many types of CDS that could be beneficial” [36]. Instead, selection of the CDS rule to implement was left to providers, who could take into account their workflow, patient population, and quality improvement efforts. This change was recognized as an interim step taken in the absence of consensus standards for clinically specific CDS requirements. The Stage 2 proposed rule recently reinstated the 5-artifact requirement [37].

The clinical knowledge that should underlie CDS recommendations and the technology available to deliver the knowledge are both rapidly evolving, making it challenging to specify clinically precise meaningful use objectives for CDS. Moreover, performance gaps differ widely across specialties and clinical conditions, making the priorities for the optimal use of CDS potentially distinct within each specialty domain. No framework exists to systematically assess potential CDS objectives to ensure that 1) they address the most critical gaps in care, and 2) they are clinically meaningful to the broad range of specialties participating in the Medicare and Medicaid EHR Incentive Programs.

To begin laying the groundwork for clinically specific standards for high-priority CDS, the Office of the National Coordinator for Health IT (ONC) sought the development of a methodology that would allow experts to define consensus objectives for CDS that would be clinically meaningful within their specialty and that might later be transformed into specific meaningful use objectives for CDS for future stages of the EHR Incentive Program. This paper describes the development of a framework and protocol to elicit high-priority “CDS targets” from panels of specialists, comprising high priority clinical performance gaps within their specialty that are amenable to CDS. We begin by presenting our conceptual framework for specifying high-priority CDS targets. Next, we describe our methodology for identifying candidate clinical performance gaps and CDS opportunities, and the selection, design, and composition of panels for pilot testing the protocol. We then summarize the results of the pilot and the strengths and limitations of the protocol. A detailed report of the study’s methodology and results is available at: http://www.rand.org/pubs/technical_reports/TR1129.html.

Conceptual framework for specifying high-priority CDS targets

Our approach to defining potential CDS objectives began with a recognition that EHRs could potentially use many kinds of CDS features to target a specific clinical performance gap. For example, the overuse of antibiotics in upper respiratory infections might be targeted by an alert if an antibiotic is prescribed when documentation does not warrant it, or by a smart form that guides documentation and therapy selection. In most cases, there would be insufficient evidence to determine which CDS intervention is most effective and thus should be codified as a specific CDS objective; moreover, CDS objectives should be flexible enough to accommodate the rapid pace of innovation in these technologies. On the other hand, meaningful use objectives need to include specific EHR features that can be assessed as present or absent in a given EHR and for which clinician usage can be measured. Given these constraints, we sought to specify CDS “targets,” rather than discrete CDS interventions, that might serve as the basis for future objectives.

We conceptualized a CDS target as a “clinical performance gap” that could be addressed by CDS. Further, we conceptualized high priority CDS targets as the most critical clinical performance gaps for which one or more CDS opportunities, both effective and compatible with clinical workflow, could be implemented to address the gap (Figure 1). In the following sections we define clinical performance gaps and CDS opportunities and provide examples of each.

Figure 1. Conceptual framework for defining high priority CDS targets.

Clinical Performance Gaps are clinical areas in which actual practice does not conform to optimal achievable practice. Performance gaps might include:

Failures to deliver care when indicated;

Inappropriate use of diagnostic tests or procedures;

Preventable adverse events;

Disparities or unwanted variations in care delivery; or

Deficiencies in patients’ experience of care (e.g., engagement in decision-making).

A clinical performance gap may or may not be associated with a formal quality measure. Figure 1 illustrates the relationship between quality measures and performance gaps. Although quality measures should always emerge from some recognition of a performance gap, they require many other considerations, including the strength of evidence supporting the desired action, the availability of data to measure each indicator, and the degree to which providers can be held accountable for their performance on the measure. By contrast, the evidence underlying clinical performance gaps may include clinical epidemiology or anecdotal observation in addition to empirical research. Many performance gaps in specialty care do not have associated quality measures because measure development and validation can be a laborious and lengthy process. Thus, as shown in Figure 1, there are many important performance gaps for which quality measures do not exist but which still represent opportunities for improving the quality of care.

CDS opportunities

A CDS opportunity is a description of a specific CDS intervention, including existing interventions or those that might be developed in the near term, that could be expected to address a clinical performance gap. CDS opportunities might include alerts, order sets, and documentation templates, among other types of interventions. Only a subset of CDS opportunities might be amenable to addressing particular performance gaps (Figure 1), due to either the effectiveness of current CDS technology or to the compatibility of those technologies with the unique aspects of workflow within the specialty.

The main task of the panel was to consider the importance of an initial set of performance gaps, and then to consider the strength of the CDS opportunities for the highest rated gaps. “High priority CDS targets” were those performance gaps that were both rated highly important and for which the CDS opportunities to close the gap were rated as having high potential impact and being highly compatible with clinical workflows. Table 1 provides a sample of clinical performance gaps from each of the four specialties and an associated CDS opportunity.

Table 1 Four illustrative CDS targets: clinical performance gaps and CDS opportunities

Methodology for identifying candidate clinical performance gaps and CDS opportunities

In preparation for each expert panel, study staff worked with the panel chair and co-chair to identify candidate clinical performance gaps and CDS opportunities from a wide range of sources.

Clinical performance gaps

We used three approaches to identify candidate clinical performance gaps for each panel. First, we scanned existing quality measure repositories and websites of quality measure producers, including the National Quality Measures Clearinghouse, National Quality Forum, Physician Quality Reporting System, National Committee for Quality Assurance, American Medical Association Physician Consortium for Performance Improvement, Hospital Inpatient Quality Reporting Program, Premier Hospital Quality Incentive Demonstration, and the quality measures selected for reporting in the Stage 1 proposed rule for the Medicare and Medicaid EHR Incentive Programs. Quality measures selected as relevant for each specialty were restated as declarative gap statements (Table 1). Second, we conducted an environmental scan to collect data, where available, on the prevalence of each performance gap and the clinical and economic outcomes of each gap—including both morbidity and mortality if appropriate—to provide panelists with a source of objective information about the relative importance of each gap prior to the ratings. Finally, we reviewed the preliminary list of performance gaps with the panel co-chairs, who recommended additions and/or revisions to the list. During the panel’s first meeting, we asked panelists to nominate additional performance gaps, which were included in the rating process. Throughout, we sought to represent gaps from each of the six priority domains of the National Priorities Partnership [38] — patient and family engagement, population health, safety, care coordination, palliative and end-of-life care, and elimination of overuse.

CDS opportunities

We then identified CDS opportunities for each of the clinical performance gaps using an approach depicted in Figure 2. First, for each performance gap, we identified one or more clinical actions that physicians and other health care professionals could take to address the clinical performance gap based on team members’ clinical experience. Second, we considered how specific CDS interventions could support providers in taking those clinical actions. These choices were influenced by two other factors: 1) the type of information that would be needed by (and available to) the CDS tool to be able to support the clinical action, and 2) consideration of the clinical workflow into which the tool might be inserted. For each specialty area, we conducted a scan of the published literature for specific CDS interventions that had been used within the specialty domain, and abstracted data on their effectiveness and their impact on workflow. We were unable to collect data on CDS tools in development or those for which evaluations were not published in the peer-reviewed literature.

Figure 2. Approach for specifying CDS opportunities for clinical performance gaps.

To supplement the tools described in the literature, we worked with the panel co-chairs to identify additional CDS interventions. We sought to identify both existing tools and CDS concepts—assuming that there would be adequate lead time to allow vendors to develop these tools before later stages of meaningful use objectives were released. Although we did not provide panelists with an opportunity to add candidate CDS opportunities because of time constraints, such an approach would be desirable.

We also sought to identify CDS opportunities that covered a broad range of CDS categories. We used a taxonomy developed by Osheroff and colleagues [39] that included six categories of CDS interventions (Table 2).

Table 2 CDS intervention categories with examples

In Table 3, we provide examples of the workflow elements that guided the development of CDS opportunities. No comprehensive workflow framework was available to draw on to select optimal insertion points in workflows. Moreover, different clinicians or practice organizations may insert different forms of CDS into different workflows, and so a one-size-fits-all approach may not be appropriate. The framework we developed for the purposes of this project decomposes workflow into specific tasks, the actors or persons who take action, and the settings in which the task might occur.

Table 3 Examples of workflow elements
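
To make the relationship among performance gaps, CDS intervention categories, and workflow elements concrete, the sketch below models a single CDS opportunity in Python. It is illustrative only: the category names approximate the Osheroff taxonomy referenced in Table 2, the workflow fields mirror the task/actor/setting decomposition in Table 3, and the example opportunity is a hypothetical oncology scenario rather than an item from the study's rating lists.

```python
from dataclasses import dataclass
from enum import Enum


class CDSCategory(Enum):
    """Illustrative intervention categories, approximating the taxonomy in Table 2."""
    DOCUMENTATION_FORM = "Documentation forms and templates"
    RELEVANT_DATA_PRESENTATION = "Relevant data presentation"
    ORDER_CREATION_FACILITATOR = "Order and prescription creation facilitators"
    PROTOCOL_PATHWAY_SUPPORT = "Protocol and pathway support"
    REFERENCE_INFORMATION = "Reference information and guidance"
    ALERT_OR_REMINDER = "Alerts and reminders"


@dataclass
class WorkflowElement:
    """Task/actor/setting decomposition of workflow (cf. Table 3)."""
    task: str
    actor: str
    setting: str


@dataclass
class CDSOpportunity:
    """A candidate CDS intervention paired with the performance gap it addresses."""
    performance_gap: str
    description: str
    category: CDSCategory
    workflow: WorkflowElement


# Hypothetical example pairing an oncology gap with a documentation-based intervention.
example = CDSOpportunity(
    performance_gap="Cancer-related pain is inconsistently assessed and managed",
    description="Smart form that captures pain intensity and suggests guideline-based management",
    category=CDSCategory.DOCUMENTATION_FORM,
    workflow=WorkflowElement(
        task="document symptoms during a follow-up visit",
        actor="medical oncologist",
        setting="ambulatory oncology clinic",
    ),
)
```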

Expert panel protocol

Meeting format and rating process

We used a teleconference meeting format with webinar, hosting three 90-minute teleconferences with each panel (Figure 3). Across these three meetings the panelists completed two modified Delphi rating processes, one focused on rating the importance of each performance gap, and the second focused on rating, for each important performance gap, the compatibility of CDS with clinical workflow and the potential impact for CDS to close the performance gap. Each rating process began with an initial round of ratings that the panelists conducted independently and confidentially. Panelist ratings were then compiled for review and discussion on a panel teleconference, and the discussion was then followed immediately by a second round of ratings, which the panelists were asked to complete before leaving the call. Panelists submitted their ratings electronically to facilitate data collection, ensure completeness of data, and to expedite the analysis.

Figure 3. Expert panel protocol.

Panelists received a summary of the first round of ratings prior to attending the second and third calls to allow them to review their ratings relative to those of other panelists. Table 4 shows a one-page example of a report from the first round of ratings of the oncology CDS opportunities. Panelists could see the distribution of initial ratings by looking at the numbers above the 1 to 9 rating line, which show the number of panelists who selected each value. For example, 2 panelists assigned a rating of “6” for the compatibility of the first CDS opportunity for addressing Gap #3 (a smart form that captures pain intensity). Each panelist received a different printout; the distribution of ratings was the same on all reports, but the caret (^) below the rating line showed the initial rating assigned by an individual panelist.

Table 4 Sample panelist rating report depicting the distribution of panelists’ ratings and the panelist’s own rating
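
As an illustration of the report format described above, the short Python sketch below renders one rating line in the same style: counts of panelists above the 1-to-9 scale and a caret marking the individual panelist's own rating. The function name and the sample ratings are hypothetical and serve only to mirror the layout of Table 4.

```python
def render_rating_line(ratings, own_rating):
    """Render one item from a first-round rating report: panelist counts above
    the 1-9 scale, with a caret (^) beneath the panelist's own rating.
    A hypothetical helper mimicking the layout described for Table 4."""
    counts = [ratings.count(v) for v in range(1, 10)]
    count_row = " ".join(str(c) if c else "." for c in counts)   # "." where no panelist chose the value
    scale_row = " ".join(str(v) for v in range(1, 10))
    caret_row = " ".join("^" if v == own_rating else " " for v in range(1, 10))
    return "\n".join([count_row, scale_row, caret_row])          # assumes single-digit counts for alignment


# Example: 2 of 12 panelists rated compatibility as "6"; this panelist also rated it 6.
print(render_rating_line([3, 5, 6, 6, 7, 7, 8, 8, 8, 9, 9, 9], own_rating=6))
```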

During the second and third teleconferences, the discussion of first round gap and CDS opportunity ratings was led by a clinician who was involved in developing the initial list of gap statements and associated CDS opportunities. The discussion during the second and third teleconferences focused on those items that, after analysis of the first round of ratings, were associated with “indeterminate” levels of agreement based on the dispersion in panelists’ ratings (See Analysis of Ratings, below). After discussion, the panelists were then asked to independently and confidentially re-rate all of the gaps (during teleconference #2) and gap-CDS opportunity pairs both individually and as a set of opportunities for a given gap statement (during teleconference #3).

Rating criteria

We developed three criteria for panelists to use to rate individual performance gaps and CDS opportunities (Tables 5 and 6). In defining the importance criterion, we looked to the National Priorities Partnership framework to identify potential dimensions of importance, from which we selected three: 1) population health (i.e., prevalence, health impact on individuals), 2) patient engagement, and 3) efficiency. We also included an additional dimension, the extent to which evidence supports specific clinical actions to address each performance gap. In assessing compatibility, panelists were instructed to consider a range of practice workflows into which the CDS intervention might be inserted, including their own workflows, those of ancillary staff, and those that might be common in other practice settings. This criterion was considered important because the timing with which CDS is introduced in a workflow and its level of intrusiveness could impact the overall utility of the tool. In assessing the potential impact of CDS on the performance gap, panelists were asked to imagine how CDS tools would promote actions to address each gap. Panelists used a nine-point scale with the following anchors: for performance gap ratings (1 = Not at all important; 5 = Equivocal; 9 = Extremely important) and for CDS opportunities (1 = Not at all compatible/No potential impact; 5 = Equivocal; 9 = Extremely compatible/Extremely high potential impact).

Table 5 Rating criteria used to elicit CDS priority performance gap-CDS opportunities: clinical performance gaps
Table 6 Rating criteria used to elicit CDS priority performance gap-CDS opportunities: CDS opportunities

Analysis of ratings

Following conventional methods for analyzing data from a modified Delphi process [41], we computed three sets of estimates for each performance gap and CDS opportunity. First, we calculated median ratings to measure the central tendency for the set of panelists’ ratings. We then estimated mean absolute deviations from the median to measure the dispersion of the ratings. Third, we classified ratings as exhibiting agreement, disagreement, or indeterminate levels of agreement, using nonparametric decision rules that take into account the distribution of panelists’ scores. “Agreement” is achieved when panelists’ ratings converge tightly around the median rating, while “disagreement” reflects a polarization of opinion that occurs when a large number of panelists provide ratings in both extremes of the rating scale. Because typical rules for measuring agreement are based on panels of size 9 and our panels ranged in size from a low of 12 to a high of 17, we followed the generalized scoring method identified in the RAND/UCLA Appropriateness Method manual [41] for measuring disagreement and agreement with larger panels, as shown in Table 7.

Table 7 Definitions of agreement and disagreement for different panel sizes
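
The sketch below illustrates, in Python, the three computations described above for a single item: the median rating, the mean absolute deviation from the median, and a nonparametric agreement classification. The two threshold parameters stand in for the panel-size-specific cut-offs summarized in Table 7; the values and the exact decision rule shown here are simplified assumptions, not the published RAND/UCLA rules.

```python
import statistics


def summarize_ratings(ratings, max_outside_for_agreement, min_each_extreme_for_disagreement):
    """Summarize one item's panel ratings: median, mean absolute deviation from
    the median, and an agreement classification. The threshold arguments stand in
    for the panel-size-specific cut-offs in Table 7; the rule below is a
    simplified placeholder for the published definitions."""
    median = statistics.median(ratings)
    mad = statistics.mean(abs(r - median) for r in ratings)

    # Identify the 3-point region of the 9-point scale containing the median
    # (1-3, 4-6, or 7-9); boundary handling here is deliberately simplified.
    if median <= 3.5:
        region = range(1, 4)
    elif median < 6.5:
        region = range(4, 7)
    else:
        region = range(7, 10)

    outside = sum(r not in region for r in ratings)   # ratings outside the median's region
    low_extreme = sum(r <= 3 for r in ratings)        # ratings in the bottom third
    high_extreme = sum(r >= 7 for r in ratings)       # ratings in the top third

    if low_extreme >= min_each_extreme_for_disagreement and high_extreme >= min_each_extreme_for_disagreement:
        level = "disagreement"    # opinion polarized at both extremes of the scale
    elif outside <= max_outside_for_agreement:
        level = "agreement"       # ratings converge tightly around the median
    else:
        level = "indeterminate"
    return median, mad, level


# Example with a 12-member panel and placeholder cut-offs.
median, mad, level = summarize_ratings(
    [6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9],
    max_outside_for_agreement=2,
    min_each_extreme_for_disagreement=4,
)
print(median, round(mad, 2), level)  # 8.0 0.75 agreement
```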

According to the RAND/UCLA Appropriateness Method, classification of results depends only on median ratings and the presence or absence of disagreement. Performance gaps with median ratings in the top third of the 9-point scale without disagreement are classified as important, those with median ratings in the bottom third without disagreement are classified as unimportant, and those with intermediate median ratings or any median with disagreement are equivocal (Table 8). However, we used more restrictive criteria to identify high priority clinical performance gaps, by requiring that each gap have both a median rating between 7 and 9 and exhibit statistical agreement. Items with indeterminate levels of agreement were not considered high priority. A similar procedure was used in rating the CDS opportunities for the high priority gaps.

Table 8 Classification of performance gaps and CDS opportunities based on median ratings and statistical agreement, by rating criterion
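
Assuming an agreement level has been computed as above, the classification in Table 8 and the stricter high-priority criterion can be expressed as a small decision function; this is a sketch of the logic described in the text, not code used in the study.

```python
def classify_gap(median, agreement_level):
    """Classify a performance gap from its median importance rating and agreement
    level. A sketch of the scheme in Table 8 plus the study's stricter rule:
    'high priority' requires both a median of 7-9 and statistical agreement."""
    if agreement_level == "disagreement":
        return "equivocal"
    if median >= 7:
        return "high priority" if agreement_level == "agreement" else "important, not high priority"
    if median <= 3:
        return "unimportant"
    return "equivocal"


print(classify_gap(8.0, "agreement"))      # high priority
print(classify_gap(8.0, "indeterminate"))  # important, not high priority
print(classify_gap(2.0, "agreement"))      # unimportant
```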

Items that were classified as “equivocal” were not discussed after the first round of ratings. After the second round of rating the performance gaps, we advanced roughly the 8 highest rated gaps (plus or minus 4) that achieved “agreement” to the next stage, in which the panels considered CDS opportunities. This cut point was set to allow adequate time during the third call, within its 90-minute limit, to discuss the one or more CDS opportunities for each performance gap.
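
The cut point for advancing gaps to the CDS-opportunity rating stage can likewise be expressed as a simple selection step. The sketch below assumes each gap has already been summarized with a median importance rating and an agreement level; the gap names and the data structure are hypothetical.

```python
def select_gaps_for_cds_rating(gap_summaries, target=8):
    """Advance roughly the `target` highest-rated gaps that achieved agreement
    (the study used 8 +/- 4, bounded by the 90-minute discussion window)."""
    eligible = [g for g in gap_summaries if g["agreement"] == "agreement"]
    eligible.sort(key=lambda g: g["median"], reverse=True)
    return eligible[:target]


# Hypothetical gap summaries; only gaps with agreement are eligible to advance.
gaps = [
    {"gap": "Under-treatment of cancer-related pain", "median": 9, "agreement": "agreement"},
    {"gap": "Missed follow-up after abnormal findings", "median": 8, "agreement": "indeterminate"},
    {"gap": "Overuse of antibiotics for upper respiratory infections", "median": 7, "agreement": "agreement"},
]
print([g["gap"] for g in select_gaps_for_cds_rating(gaps)])
```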

From the two sets of ratings provided by experts on each panel, we compiled a list of high priority targets for CDS. Performance gaps that were highly rated on importance and for which the set of CDS tools or concepts was rated as being compatible with clinical practice and had a high potential impact on addressing the performance gap were designated high priority CDS targets.

Selection of specialty-specific vs. condition-specific scope for expert panels

Constructing a panel first involves selecting the clinical domain that the panel will be asked to address. Conceptually, the focus of CDS prioritization could be on the breadth of problems within a single specialty or specialty subdomain (e.g., gastroenterology, including peptic ulcer disease, gastrointestinal malignancies, etc.) or it could focus on a condition from the perspective of the multiple specialties that treat the condition (e.g., viral hepatitis, including hepatologists, primary care physicians, infectious disease specialists, and transplant surgeons). Specialty-specific panels might be more likely to generate CDS priorities that are considered highly relevant within the specialty; however, this approach might reinforce the “silo” nature of medicine by failing to include the full scope of care provided for the conditions in question. A condition-specific approach, by contrast, provides an opportunity to bring together perspectives from a broad set of specialists to prioritize CDS that would best improve care for the condition, regardless of specialty. However, this approach would likely cover fewer conditions than the specialty-specific approach and it might have a higher risk of producing CDS priorities that miss the mark for specialties that are not strongly represented on the panel. Given these tradeoffs, we sought to include each kind of panel in pilot testing.

Selection of expert panel members

Selection of panelists began with the recruitment of two co-chairs for each panel, one of whom was selected based on nationally-recognized leadership in the clinical domain and the other based on having expertise in CDS within the specialty. Co-chairs were selected in consultation with relevant specialty societies (such as the American College of Cardiology) and with staff of the American Medical Association Physician Consortium for Performance Improvement (AMA-PCPI), which works with a broad range of specialties to set quality improvement objectives. After recruiting the panel co-chairs, we first consulted with them to refine the panel’s scope, setting bounds that would enable covering high-priority areas within the time available for the panel’s work. We then selected and recruited individual panelists based on their clinical expertise, community influence (i.e., leadership in professional organizations for their specialty and service on advisory panels related to quality of care, practice improvement, and/or use of health IT), and the diversity of settings in which they practice (to reflect both academic and community practice). For condition-specific panels, members were also selected to represent a balance of the specialties involved in caring for the conditions in question. For specialty-specific panels we also sought to include a relevant range of clinical sub-domains within the specialty.

The panel size was not fixed across panels. We selected approximately 14-17 members per panel to ensure that we would have a minimum of 9 panelists to complete the Delphi rating process after allowing for attrition.

Scope and membership of panels for pilot testing the protocol

We selected four panels with different characteristics to maximize the amount of information learned during the pilot. To test variations on the definition of a panel’s area of clinical focus, we selected 1) one medical specialty (oncology), 2) one surgical specialty (orthopedic surgery), 3) one non-surgical procedural specialty (interventional cardiology), and 4) one primary care specialty (pediatrics). Key factors considered in selecting the focus, content and membership of the panels were as follows:

Ensuring diversity of clinical workflows

The four panels were deliberately selected to represent a diverse set of workflows with known performance gaps potentially amenable to CDS. These workflows included: 1) Managing transitions between inpatient and ambulatory care settings (orthopedic surgery, interventional cardiology, oncology); 2) Care coordination with other specialists (oncology, interventional cardiology, pediatrics); 3) Care coordination during emergencies (interventional cardiology); 4) Selecting and implementing treatment protocols when the evidence base may be rapidly evolving (oncology); 5) Care provided by non-physician staff with specialized training (orthopedic surgery); 6) Workflows specific to different phases of illness (oncology); and 7) Long-term follow up and management (oncology, pediatrics) potentially facilitated by the use of registries (interventional cardiology, oncology).

Variation in the use of EHRs and CDS

We also selected specialties that were known to be relatively advanced users of CDS (oncology) as well as those that were not known for having a high level of CDS development or EHR adoption (orthopedic surgery).

Additional boundaries on clinical scope within specialty

Because the project was charged with developing and pilot testing a priority-setting protocol within a limited timeframe, we limited the clinical scope of each panel by selecting important specialty sub-domains, clinical conditions, or both, in consultation with the panel co-chairs. The considerations for each panel were as follows.

Oncology

The oncology panel focused on medical oncology in recognition of the fact that medical oncologists are responsible for the largest share of all health expenditures for oncology. Also, given the large number of different cancers that could be addressed, we limited the focus to two of the most prevalent cancers, breast and colorectal cancer. While radiation oncologists and surgical oncologists have very different workflows that may define different CDS opportunities, we included two radiation oncologists and two surgeons on the panel to address the fact that medical oncologists commonly coordinate patient care with these other specialists. Thus, the oncology panel used a more condition-specific approach and it included input from multiple related specialties, but it still focused on care processes delivered by oncologists, such as the planning and administration of chemotherapy.

Orthopedics

The scope of the orthopedics panel was confined to total hip and total knee replacement surgery, two of the most common procedures within the specialty. Many workflows and performance gaps associated with total joint replacement were also thought to be representative of those characteristic of other types of orthopedic surgery. We included a small number of spine and hand surgeons as well as a number of general orthopedic surgeons to understand areas where performance gaps and CDS opportunities might be common across these other areas.

Pediatrics

While our pediatrics panel consisted entirely of pediatricians, a small number of panelists had expertise in selected pediatric clinical areas, such as allergy and behavioral health. The scope of the panel was restricted to pediatric conditions that were mostly treated in primary care settings.

Interventional cardiology

For the interventional cardiology panel, we focused on percutaneous coronary intervention (PCI) both in the management of Acute Coronary Syndrome (ACS) as well as stable Coronary Artery Disease (CAD). Primary care-related performance gaps, such as the management of cholesterol levels, were not included. However, to make the panel more condition-specific rather than strictly specialty-focused, we included internists, interventional and non-interventional cardiologists, and electrophysiologists who had expertise related to the role of PCI and its coordination in managing ACS or stable CAD.

Members for each panel were selected based on the criteria noted above. We started by recruiting members who were known contributors to existing AMA-PCPI performance measurement panels—including those relating to breast cancer, colorectal cancer, and PCI. Many of the individuals had been nominated by their specialty organizations for developing performance measures through the AMA-PCPI. We then added clinical experts who were identified based on outreach to specialty organizations, use of key informants, and personal knowledge of experts by the project team. Because orthopedic surgeons were not heavily represented on any existing AMA-PCPI panels, we requested assistance from the American Academy of Orthopedic Surgeons (AAOS) and the North American Spine Society (NASS) to identify suitable experts, and particularly those individuals who had expertise with EHRs or clinical decision support. AAOS engaged in an open call to their membership while NASS recommended specific candidates.

Results

The protocol was successfully implemented within each of the four specialty panels, and each produced lists of high priority targets amenable to CDS (Table 9). Each panel considered an initial set of 22 to 28 performance gaps, including numerous gaps nominated by individual panelists. Following the first stage of ratings, 6 to 15 gaps were classified as high priority across the four panels, with some panels endorsing gaps much more selectively than others. For example, the orthopedic surgery and pediatrics panels endorsed fewer than 21 percent and 39 percent of gaps, respectively, as highly important. Notably, less than half of the 43 clinical performance gaps that were rated “high priority” were based on quality measures, suggesting that clinical observation is an important source of data for determining priorities for CDS.

Table 9 Number of CDS targets rated high priority, by panel

Only the oncology panel found an abundance of highly effective and workflow-compatible CDS opportunities to address its high priority performance gaps: fourteen of its fifteen gaps were determined to be amenable to CDS, compared with only 27, 36, and 50 percent of gaps for pediatrics, interventional cardiology, and orthopedic surgery, respectively. Nevertheless, all panels achieved consensus on at least 3 high priority targets for CDS. The complete set of rating results is available elsewhere.

All ratings were completed in a short timeframe with limited attrition. Preparation for and recruitment of the four panels were completed in 6 months, and the panel protocol was implemented for all panels in only three months. Attrition by panelists was low despite a requirement for panelists to participate in all three teleconferences: only two to three experts per panel withdrew during the course of the study. Our staged approach and use of web meetings were designed specifically to minimize participant burden and to enable rapid completion of the rating tasks, and they appeared to play a key role in the success of the pilot.

The need to implement the protocol in a short timeframe significantly limited the scope of our study. First, we were unable to conduct an extensive test of alternative frameworks and protocols for eliciting CDS targets. For example, we might have explored the use of a protocol that elicited very specific decision rules (rather than the broader construct of “targets”). We also might have incorporated specific workflows within which CDS might be optimally deployed in the rating tasks. These alternative frameworks may be worthy of additional study. Because of time constraints, we also did not allow panelists to nominate CDS opportunities beyond those developed by staff in conjunction with panel co-chairs. While a more comprehensive elicitation of available CDS interventions for each target might influence ratings, panelists were directed to rate the overall effectiveness and compatibility of the set of interventions for each performance gap as well as any other interventions that were not presented or discussed.

Rating the effectiveness and compatibility of CDS interventions appeared to be the most complex task for panelists. Many existing CDS interventions were described in the literature with varying levels of detail, and many CDS interventions rated by panelists were only concepts and were therefore difficult to fully specify. Developing standard templates for CDS interventions for use in future implementations of the protocol would help panelists have a common set of information on existing tools or tool concepts to facilitate the rating task. In addition, outreach to the vendor community to identify existing tools or tools in development might provide a more comprehensive set of CDS interventions to inform panelists’ ratings. Each of these preparatory activities highlights the importance of having an adequate number of skilled staff to conduct literature reviews and compile preliminary sets of rating items. The validity of the final set of high priority CDS targets could depend on the extent to which these initial activities are effective in identifying a comprehensive set of performance gaps and CDS interventions.

Conclusion

The extent to which CMS’s EHR Incentive Programs will stimulate HIT-enabled quality improvement will depend to a large extent on the way in which meaningful use objectives are specified. We developed a conceptual framework and protocol for eliciting high priority CDS targets that are clinically important and that are amenable to CDS. These targets were elicited directly from specialists and reflect consensus recommendations following rating exercises and group discussions. While the targets are specific, they allow for a broad range of CDS interventions that can be used to address each performance gap. Such an approach recognizes specialists’ own experience and preferences for CDS while maintaining strong incentives for vendor innovation. CDS targets could be used to specify meaningful use objectives for the CMS EHR Incentive Programs or could play a role in other pay-for-performance programs.

Endnotes

aThis definition is a hybrid of the CDS definitions from the Healthcare Information and Management Systems Society (HIMSS) [39] and from CMS [36].

References

  1. Institute of Medicine (U.S.), Committee on Quality of Health Care in America: Crossing the Quality Chasm: A New Health System for the 21st Century. 2001, Washington: National Academy Press
  2. Gorry GA, Barnett GO: Sequential diagnosis by computer. JAMA. 1968, 205 (12): 849-854. 10.1001/jama.1968.03140380053012
  3. Shortliffe EH, Davis R, Axline SG, Buchanan BG, Green CC, Cohen SN: Computer-based consultations in clinical therapeutics: explanation and rule acquisition capabilities of the MYCIN system. Comput Biomed Res. 1975, 8 (4): 303-320. 10.1016/0010-4809(75)90009-9
  4. Berner ES, Maisiak RS, Cobbs CG, Taunton OD: Effects of a decision support system on physicians' diagnostic performance. J Am Med Inform Assoc. 1999, 6 (5): 420-427. 10.1136/jamia.1999.0060420
  5. Friedman CP, Elstein AS, Wolf FM, Murphy GC, Franz TM, Heckerling PS, Fine PL, Miller TM, Abraham V: Enhancement of clinicians' diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. JAMA. 1999, 282 (19): 1851-1856. 10.1001/jama.282.19.1851
  6. Samore MH, Bateman K, Alder SC, Hannah E, Donnelly S, Stoddard GJ, Haddadin B, Rubin MA, Williamson J, Stults B: Clinical decision support and appropriateness of antimicrobial prescribing: a randomized trial. JAMA. 2005, 294 (18): 2305-2314. 10.1001/jama.294.18.2305
  7. Graber ML, Mathew A: Performance of a web-based clinical diagnosis support system for internists. J Gen Intern Med. 2008, 23 (Suppl 1): 37-40
  8. Elkin PL, Liebow M, Bauer BA, Chaliki S, Wahner-Roedler D, Bundrick J, Lee M, Brown SH, Froehling D, Bailey K: The introduction of a diagnostic decision support system (DXplain) into the workflow of a teaching hospital service can decrease the cost of service for diagnostically challenging Diagnostic Related Groups (DRGs). Int J Med Inform. 2010, 79 (11): 772-777. 10.1016/j.ijmedinf.2010.09.004
  9. Institute of Medicine (U.S.), Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders: Improving the Quality of Health Care for Mental and Substance-Use Conditions. 2006, Washington: National Academies Press
  10. Hunt DL, Haynes RB, Hanna SE, Smith K: Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998, 280 (15): 1339-1346. 10.1001/jama.280.15.1339
  11. Bates DW, Pappius E, Kuperman GJ, Sittig D, Burstin H, Fairchild D, Brennan TA, Teich JM: Using information systems to measure and improve quality. Int J Med Inform. 1999, 53 (2-3): 115-124
  12. Kuperman GJ, Teich JM, Gandhi TK, Bates DW: Patient safety and computerized medication ordering at Brigham and Women's Hospital. Jt Comm J Qual Improv. 2001, 27 (10): 509-521
  13. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, Sam J, Haynes RB: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005, 293 (10): 1223-1238. 10.1001/jama.293.10.1223
  14. Kawamoto K, Houlihan CA, Balas EA, Lobach DF: Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005, 330 (7494): 765. 10.1136/bmj.38398.500764.8F
  15. Schedlbauer A, Prasad V, Mulvaney C, Phansalkar S, Stanton W, Bates DW, Avery AJ: What evidence supports the use of computerized alerts and prompts to improve clinicians' prescribing behavior? J Am Med Inform Assoc. 2009, 16 (4): 531-538. 10.1197/jamia.M2910
  16. Amarasingham R, Plantinga L, Diener-West M, Gaskin DJ, Powe NR: Clinical information technologies and inpatient outcomes: a multiple hospital study. Arch Intern Med. 2009, 169 (2): 108-114. 10.1001/archinternmed.2008.520
  17. Jaspers MW, Smeulers M, Vermeulen H, Peute LW: Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. J Am Med Inform Assoc. 2011, 18 (3): 327-334. 10.1136/amiajnl-2011-000094
  18. McDonald CJ, Overhage JM: Guidelines you can follow and can trust. An ideal and an example. JAMA. 1994, 271 (11): 872-873. 10.1001/jama.1994.03510350082042
  19. Overhage JM, Tierney WM, Zhou XH, McDonald CJ: A randomized trial of "corollary orders" to prevent errors of omission. J Am Med Inform Assoc. 1997, 4 (5): 364-375. 10.1136/jamia.1997.0040364
  20. Maviglia SM, Zielstorff RD, Paterno M, Teich JM, Bates DW, Kuperman GJ: Automating complex guidelines for chronic disease: lessons learned. J Am Med Inform Assoc. 2003, 10 (2): 154-165. 10.1197/jamia.M1181
  21. Sintchenko V, Coiera E, Iredell JR, Gilbert GL: Comparative impact of guidelines, clinical data, and decision support on prescribing decisions: an interactive web experiment with simulated cases. J Am Med Inform Assoc. 2004, 11 (1): 71-77
  22. Eslami S, Abu-Hanna A, de Keizer NF: Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc. 2007, 14 (4): 400-406. 10.1197/jamia.M2238
  23. Pearson SA, Moxey A, Robertson J, Hains I, Williamson M, Reeve J, Newby D: Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990-2007). BMC Health Serv Res. 2009, 9: 154. 10.1186/1472-6963-9-154
  24. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J: The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009, (3): CD001096
  25. Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma'Luf N, Boyle D, Leape L: The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc. 1999, 6 (4): 313-321. 10.1136/jamia.1999.00660313
  26. Teich JM, Merchia PR, Schmiz JL, Kuperman GJ, Spurr CD, Bates DW: Effects of computerized physician order entry on prescribing practices. Arch Intern Med. 2000, 160 (18): 2741-2747. 10.1001/archinte.160.18.2741
  27. Bates DW, Cohen M, Leape LL, Overhage JM, Shabot MM, Sheridan T: Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc. 2001, 8 (4): 299-308. 10.1136/jamia.2001.0080299
  28. Kaushal R, Shojania KG, Bates DW: Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003, 163 (12): 1409-1416. 10.1001/archinte.163.12.1409
  29. Kuperman GJ, Bobb A, Payne TH, Avery AJ, Gandhi TK, Burns G, Classen DC, Bates DW: Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc. 2007, 14 (1): 29-40
  30. Kaushal R, Kern LM, Barron Y, Quaresimo J, Abramson EL: Electronic prescribing improves medication safety in community-based office practices. J Gen Intern Med. 2010, 25 (6): 530-536. 10.1007/s11606-009-1238-8
  31. Saleem JJ, Patterson ES, Militello L, Render ML, Orshansky G, Asch SM: Exploring barriers and facilitators to the use of computerized clinical reminders. J Am Med Inform Assoc. 2005, 12 (4): 438-447. 10.1197/jamia.M1777
  32. Miller RH, Sim I: Physicians' use of electronic medical records: barriers and solutions. Health Aff (Millwood). 2004, 23 (2): 116-126. 10.1377/hlthaff.23.2.116
  33. Moxey A, Robertson J, Newby D, Hains I, Williamson M, Pearson SA: Computerized clinical decision support for prescribing: provision does not guarantee uptake. J Am Med Inform Assoc. 2010, 17 (1): 25-33. 10.1197/jamia.M3170
  34. Sittig DF, Wright A, Osheroff JA, Middleton B, Teich JM, Ash JS, Campbell E, Bates DW: Grand challenges in clinical decision support. J Biomed Inform. 2008, 41 (2): 387-392. 10.1016/j.jbi.2007.09.003
  35. Blumenthal D, Tavenner M: The "meaningful use" regulation for electronic health records. N Engl J Med. 2010, 363 (6): 501-504. 10.1056/NEJMp1006114
  36. Centers for Medicare and Medicaid Services: Medicare and Medicaid Programs; Electronic Health Record Incentive Program. Federal Register. 2010, 75: 44314-44588
  37. Centers for Medicare and Medicaid Services: Medicare and Medicaid Programs; Electronic Health Record Incentive Program–Stage 2. Federal Register. 2012, 77: 13698-13829
  38. National Priorities Partnership [website]. http://www.nationalprioritiespartnership.org/
  39. Osheroff JA, Pifer EA, Teich JM, Sittig DF, Jenders RA: Improving Outcomes with Clinical Decision Support: An Implementer's Guide. 2005, Chicago: HIMSS
  40. Schnipper JL, Linder JA, Palchuk MB, Einbinder JS, Li Q, Postilnik A, Middleton B: "Smart Forms" in an Electronic Medical Record: documentation-based clinical decision support to improve disease management. J Am Med Inform Assoc. 2008, 15 (4): 513-523. 10.1197/jamia.M2501
  41. Aguilar MD, Bernstein S, Burnand B, Fitch K, Kahan JP, LaCalle JR, Lazaro P, Loo M, McDonnell J, Vader J: The RAND/UCLA Appropriateness Method User's Manual. 2001, Santa Monica, CA: RAND Corporation



Acknowledgments

This study was funded by the U.S. Office of the National Coordinator (ONC) for Health Information Technology, through contract HHSP23320095649WC, task order HHSP23337009T. The authors thank Liisa Hiatt for expert management of the project and Jonathan Teich, MD, Ph.D. for his contributions to our thinking about this project and for his critical review of the manuscript. The funder conceived the study, and provided feedback on the development of the protocol, but did not contribute to the drafting of the manuscript.

Author information


Corresponding author

Correspondence to Justin W Timbie.

Additional information

Authors’ contributions

DB, JT, CD, and ES developed the conceptual framework and protocol; JT and DB drafted the manuscript. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Timbie, J.W., Damberg, C.L., Schneider, E.C. et al. A conceptual framework and protocol for defining clinical decision support objectives applicable to medical specialties. BMC Med Inform Decis Mak 12, 93 (2012). https://doi.org/10.1186/1472-6947-12-93

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/1472-6947-12-93

Keywords