Conducting online expert panels: a feasibility and experimental replicability study

Abstract

Background

This paper has two goals. First, we explore the feasibility of conducting online expert panels to facilitate consensus finding among a large number of geographically distributed stakeholders. Second, we test the replicability of panel findings across four panels of different size.

Method

We engaged 119 panelists in an iterative process to identify definitional features of Continuous Quality Improvement (CQI). We conducted four parallel online panels of different size through three one-week phases by using the RAND's ExpertLens process. In Phase I, participants rated potentially definitional CQI features. In Phase II, they discussed rating results online, using asynchronous, anonymous discussion boards. In Phase III, panelists re-rated Phase I features and reported on their experiences as participants.

Results

Of the invited experts, 66% participated in all three phases; 62% of Phase I participants contributed to Phase II discussions, and 87% of them completed Phase III. Panel disagreement, measured by the mean absolute deviation from the median (MAD-M), decreased after group feedback and discussion in 36 out of 43 judgments about CQI features. Agreement between the four panels after Phase III was fair (four-way kappa = 0.36); they agreed on the status of five out of eleven CQI features. Results of the post-completion survey suggest that participants were generally satisfied with the online process. Compared with participants in smaller panels, those in larger panels were more likely to agree that they had debated each other's viewpoints.

Conclusion

It is feasible to conduct online expert panels intended to facilitate consensus finding among geographically distributed participants. The online approach may be practical for engaging large and diverse groups of stakeholders around a range of health services research topics and can help conduct multiple parallel panels to test for the reproducibility of panel conclusions.

Background

Expert panels are an established consensus-finding method in clinical and health services research [1, 2]. They often use a modified Delphi structure [3], which typically consists of two question-driven phases and one discussion phase. If conducted properly, expert panels are an invaluable tool for defining agreement on controversial subjects [4, 5]. Nonetheless, panels are expensive and laborious to conduct: It is necessary to identify representative sets of experts, coordinate experts' schedules, arrange meetings, distribute panel questions in advance, and recruit a skilled facilitator to lead discussions either in person or over the phone [6, 7]. Panel size is also limited to ensure effective in-person discussion. These limitations are particularly relevant to arranging panels that are inclusive enough to reflect the diversity of opinion in a broad field, such as Quality Improvement (QI).

Delphi panels can also be conducted online to facilitate the process of obtaining input from participants [8, 9]. Potential advantages may include the efficient use of experts' time [9]; the ability to engage more diverse and representative panelists, including experts from other countries [8]; the absence of expenses for postage and travel [9]; the ability to make online discussions anonymous and thus reduce possible biases based on participant status or personality [10–12]; and the opportunity to contribute to the elicitation process at a time convenient to panelists [9]. Potential disadvantages, however, may include lower levels of engagement and interaction among participants, caused by their relative unfamiliarity with online tools in general and the possibility of technical difficulties accessing or using an online system, which may undermine panelists' willingness to participate and affect the quality of deliberations and outputs [13].

While potentially useful, online expert panels with a discussion board functionality are a relatively new phenomenon. Previous research has also identified a number of concerns about the quality of online interaction [14], including variable participation rates, information overload, and difficulties in following discussion threads [15, 16]. The best panel size for online discussion is also unknown. Very large panels, for example, might cause coordination problems [12] or impede effective interaction. Very small panels, in turn, may not generate fruitful discussion because participants may not feel obliged to contribute to anonymous discussions [17]. In addition, we know that in-person panels given the same information may reach different conclusions [18, 19], yet we do not know the magnitude of this effect for online panels.

To evaluate both the quality and usefulness of online expert panels, it is necessary to compare them to traditional face-to-face panels. Before such a randomized controlled trial can be conducted, however, the feasibility of online panels and the replicability of their findings should be assessed. Therefore, in this article, we evaluate the feasibility of conducting online expert panels for engaging a large, diverse group of stakeholders and discuss the replicability of findings across panels of different size.

To do so, we conducted four concurrent online expert panels of various sizes that evaluated the key definitional features of the term "Continuous Quality Improvement" (CQI) and assessed panelist participation across all panel phases. We then tested levels of agreement within and between panels. We also analyzed panelists' satisfaction with the online process and specifically assessed whether it differed between panelists representing different stakeholder groups. Finally, we explored the effects of panel size on participation rates, agreement, and participants' satisfaction.

An online approach can be considered feasible if panel participation is relatively high (e.g., above the typically expected 45-50% participation rate [20]), panelists achieve consensus, and participants are generally satisfied with the process. Panel results can have an acceptable level of replicability if the level of inter-panel agreement is fair (kappa coefficient in the .2-.4 range) or above. A finding that the online panel approach is feasible would show that the method has promise not only for advancing appropriate terminology use in QI, but also for facilitating decision-making in other fields of health services research. It would also indicate that a study comparing the results of a face-to-face and an online Delphi-like panel should be conducted.

Method

To explore the feasibility of an online approach and to evaluate the replicability of panel findings, we convened four online panels and asked them to define the appropriate use of the term "Continuous Quality Improvement" (see Endnote 1). The QI field is rapidly developing [21]. Healthcare organizations are increasingly investing in QI approaches, and funders and journals support a growing level of QI research. Major communication challenges have arisen, however, due to a lack of consensus around QI terminology [22]. For example, two studies may both report the use of "CQI" but define or operationalize it so differently that they might as well report entirely different interventions [23]. Achieving improved communication thus requires consensus around key terms and must engage the perspectives of both QI practitioners and more research-oriented stakeholders. In this study, we used online expert panel methods to attempt to engage both stakeholder types.

LR and SSS used their professional networks to invite Institute for Healthcare Improvement faculty, members of the editorial boards from leading QI research journals, evaluators of Robert Wood Johnson Foundation (RWJF) quality programs, and RAND patient safety and QI experts to participate in this study. Participants were asked to nominate other QI professionals and health services researchers. Out of 259 professionals contacted, 119 agreed to participate.

As part of the agreement to participate, we asked participants to self-identify as primarily practitioners, primarily researchers, or both equally. We used stratified random sampling to assign participants to one of two small (n1 = 19, n2 = 21) or two large (n3 = 40, n4 = 39) panels and to balance the panels with regard to the number of researchers and practitioners. Participants were not informed about the size of their panels or the total number of panels. While participants knew that the study would consist of three phases, consistent with the RAND/UCLA Appropriateness Method manual [3], we did not explicitly instruct panelists to develop consensus. The study was determined to be exempt from IRB review by RAND's Human Subjects Protection Committee.
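For illustration, the sketch below shows one way such a stratified assignment could be implemented. It is a hypothetical reconstruction, not the study's randomization code: the participant identifiers, the capacity-weighted draw, and the function name assign_to_panels are assumptions; only the three self-identified strata and the four panel sizes come from the text.

```python
import random
from collections import defaultdict

def assign_to_panels(participants, panel_sizes, seed=0):
    """Stratified random assignment of participants to panels (illustrative sketch).

    participants: list of (person_id, stratum) pairs, where stratum is
    'practitioner', 'researcher', or 'both' (self-identified).
    panel_sizes: dict of panel name -> target size,
    e.g. {'A': 19, 'B': 21, 'C': 40, 'D': 39}.
    """
    rng = random.Random(seed)
    capacity = dict(panel_sizes)               # remaining open slots per panel
    panels = {name: [] for name in panel_sizes}

    strata = defaultdict(list)
    for pid, stratum in participants:
        strata[stratum].append(pid)

    for members in strata.values():
        rng.shuffle(members)
        for pid in members:
            # Draw a panel with probability proportional to its remaining capacity,
            # so each stratum is spread across panels roughly in proportion to
            # panel size while target sizes are respected exactly.
            open_panels = [name for name, c in capacity.items() if c > 0]
            weights = [capacity[name] for name in open_panels]
            chosen = rng.choices(open_panels, weights=weights)[0]
            panels[chosen].append(pid)
            capacity[chosen] -= 1
    return panels
```

A design-based alternative would be to pre-compute fixed quotas per stratum and panel; the capacity-weighted draw above simply keeps the sketch short.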

ExpertLens is one system for conducting online expert panels. It was created by an interdisciplinary team of researchers at the RAND Corporation [24]. It uses a modified-Delphi elicitation structure and replaces traditional face-to-face meetings with asynchronous, unmoderated online discussion boards. The online process used in this study consisted of three phases; each phase was limited to one week. In Phase I, panelists rated 11 features of CQI initiatives on four dimensions, including the importance of a feature for a definition of CQI. The initial features came from earlier consensus work that used a traditional expert panel process [23], but study participants could also add other important features they felt were missing. In Phase II, panelists saw their own responses as well as the medians and quartiles of their panel responses to Phase I questions. They also participated in asynchronous, anonymous, and unmoderated online discussions with the same group of colleagues in each panel. Phase II was the feedback phase that allowed panelists to review the panel response by looking at measures of central tendency and dispersion and discuss their ideas anonymously, without being influenced by the status of other panelists [12]. In Phase III, panelists re-answered Phase I questions. In the optional post-completion survey, participants rated additional features mentioned in Phase I and answered questions about their experiences participating in the online expert panel.
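As a rough illustration of the statistical feedback described above (each panelist's own Phase I answer shown alongside the panel median and quartiles), the snippet below computes those summaries for one rated feature. The ratings and the returned dictionary are assumptions for the example, not the ExpertLens implementation.

```python
import statistics

def phase2_feedback(own_rating, panel_ratings):
    """Summarize one feature for Phase II feedback: the panelist's own
    Phase I answer plus the panel median and quartiles."""
    q1, median, q3 = statistics.quantiles(panel_ratings, n=4)
    return {"own": own_rating, "median": median, "q1": q1, "q3": q3}

# Hypothetical 5-point importance ratings from a 19-person panel
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4, 3, 4, 4, 5, 2, 4, 4]
print(phase2_feedback(own_rating=3, panel_ratings=ratings))
```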

In line with consensus methods guidelines, the definitions of the importance of a particular CQI feature and of the level of consensus were determined in advance [4]. We considered a feature to be important for a CQI initiative if a panelist rated it as > 3 on a 5-point importance scale. We also used an a priori definition of consensus: if more than two-thirds (> 66.6%) of panelists agreed on the importance of a particular feature, we argued that consensus was achieved [25]. We used the mean absolute deviation from the median (MAD-M) as a measure of disagreement within panels and treated a reduction in its value between phases as a sign of increased consensus [3, 26]. MAD-M has been the preferred measure of disagreement in expert panels since the 1980s, when the RAND/UCLA Appropriateness Method was originally created. It is a good measure of disagreement because it is not affected by extreme observations and measures deviation from the median, the measure of central tendency typically used in consensus development and in this study [26]. Finally, we used four-way kappa to assess agreement between panels, treating the data as ordinal and using a weight matrix comprising the squared deviations between scores [27].
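The pre-specified decision rules and the MAD-M measure can be expressed compactly. The following is a minimal sketch of those definitions (importance if rated > 3, consensus if more than two-thirds of panelists fall on the same side, and MAD-M as the mean absolute deviation from the panel median), applied to hypothetical ratings; the function names and data are illustrative assumptions.

```python
import statistics

def mad_m(ratings):
    """Mean absolute deviation from the median (MAD-M): lower values mean
    less disagreement within the panel."""
    med = statistics.median(ratings)
    return statistics.mean(abs(r - med) for r in ratings)

def feature_status(ratings, importance_cutoff=3, consensus_share=2 / 3):
    """Classify one feature using the a priori rules: a rating > 3 counts as
    'important'; consensus requires more than two-thirds of panelists on the
    same side."""
    n = len(ratings)
    share_important = sum(r > importance_cutoff for r in ratings) / n
    if share_important > consensus_share:
        return "important"
    if (1 - share_important) > consensus_share:
        return "not important"
    return "no consensus"

# Hypothetical Phase I and Phase III ratings for one feature in one panel
phase1 = [4, 5, 3, 2, 4, 5, 4, 3, 4, 2, 5, 4, 4, 3, 4, 5, 4, 4, 3]
phase3 = [4, 4, 4, 3, 4, 5, 4, 4, 4, 3, 5, 4, 4, 4, 4, 5, 4, 4, 4]
print(feature_status(phase3))         # e.g. 'important'
print(mad_m(phase3) / mad_m(phase1))  # a ratio below 1.0 means disagreement decreased
```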

Results

Participation

Out of 119 individuals who expressed interest in participating in the ExpertLens process, 77% completed Phase I (Table 1). Participation rates varied from 63% in a small panel to 83% in a large panel. In total, 62% of Phase I participants contributed to Phase II discussions. Overall, 66% of those invited to the study, and 87% of Phase I participants, also participated in Phase III. There was no statistically significant difference in participation levels between the panels for Phases I and III.

Table 1 Participation in All Phases of the Study

In each panel, between 50% and 76% of Phase I participants contributed to Phase II discussions (Table 1). Discussion participation rates and the average number of comments per participant did not vary significantly across the panels in relation to panel size. One of the large panels (Panel C) had the most active discussion, with 76% of panel members participating by posting 16 discussion threads with 89 comments (on average, each Panel C participant initiated 0.64 discussion threads and made 3.56 comments). Table 2 illustrates the type of discussion the groups carried out by showing Panel C's discussion of Feature 5, "Use of evidence", one of the eleven potential CQI features the panelists assessed.

Table 2 A Sample Discussion Thread: Feature 5 "Use of Evidence"

Consensus

Although participants were not instructed to reach consensus, all panels were able to do so on four out of eleven features in Phase I; three panels agreed on three additional features, and two panels on one further feature (Table 3). Three features were not judged as important in any panel. In Phase III, after group feedback and discussion, all panels agreed on the importance of only three of the four features identified in Phase I; three panels agreed on five other features (Table 3). Of the features that were not judged as important by any panel in Phase I, one feature (#5) was deemed important by two panels following Phase II feedback and discussion. Table 2 illustrates comments made about this feature in Panel C. While some differences of opinion about the importance of Feature 5 remained in Panel C, its participants agreed in Phase III that this feature is important to the definition of CQI. Two features, however, were still not deemed important by any panel.

Table 3 Feature Importance to the Definition of a CQI Initiative and Agreement between Panels

The MAD-M values for features where consensus was reached ranged from .25 to 1.21 in Phase I and from .1 to .89 in Phase III. In 36 out of 43 cases (84%; see Endnote 2), the MAD-M values decreased between Phase I and Phase III. Figure 1 graphically depicts the ratio of MAD-M values in Phase III relative to Phase I; a value below 1.0 indicates a decrease in disagreement. These results suggest that panelists' answers clustered more closely around the group median after statistical feedback and discussion, meaning that agreement among panelists increased between Phase I and Phase III.

Figure 1. Distribution of Phase III/Phase I MAD-M Ratios. The figure depicts the ratio of MAD-M values in Phase III relative to Phase I; a value below 1.0 indicates a decrease in disagreement.

Replication

By design, we used stratified random sampling and identical elicitation procedures to test for the reproducibility of panel conclusions. Our Phase III results show some variation between panels (see Table 3). For instance, in Panel D, eight features were rated as important for the definition of CQI. For Panels A and C, the definition of CQI consisted of seven features each, yet not all of them were the same. Finally, for Panel B, the CQI definition consisted of only six features.

The four-way kappa, which measures the level of agreement between the four panels, was equal to .36 and thus fell within the .20-.40 range that typically indicates fair agreement [28, 29]. Agreement between the two larger panels was slightly higher (pairwise kappa = .38) than that between the two smaller panels (pairwise kappa = .24). Panels A and D, however, had 100% agreement in Phase III.

Nonetheless, Table 3 shows that all four panels agreed on the status of five out of eleven CQI features by uniformly considering them either important or not important. Five other features were endorsed as important by three panels, and one additional feature was endorsed by two panels. This finding supports the position that the three features endorsed by all four panels should be considered important to the definition of CQI, that the two features not rated as important by any panel should not be discussed further, and that the five features endorsed by three panels require additional discussion.
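For readers who want to reproduce comparisons of this kind, the sketch below computes a quadratic-weighted (squared-deviation) Cohen's kappa between two panels' ordered feature classifications. It shows only the pairwise form; the four-way statistic reported above generalizes this to multiple raters [27]. The category labels and toy classifications are assumptions, not the study data.

```python
from collections import Counter

def weighted_kappa(panel_x, panel_y, categories):
    """Cohen's kappa between two panels rating the same features, using a
    squared-deviation (quadratic) disagreement weight matrix over ordered
    categories."""
    assert len(panel_x) == len(panel_y)
    n = len(panel_x)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    w = [[(i - j) ** 2 for j in range(k)] for i in range(k)]   # disagreement weights

    obs = [[0.0] * k for _ in range(k)]                        # observed joint proportions
    for a, b in zip(panel_x, panel_y):
        obs[idx[a]][idx[b]] += 1 / n

    px, py = Counter(panel_x), Counter(panel_y)                # marginals for chance agreement
    exp = [[px[ca] * py[cb] / n ** 2 for cb in categories] for ca in categories]

    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * exp[i][j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

# Hypothetical ordered classifications of eleven features by two panels
cats = ["not important", "no consensus", "important"]
panel_a = ["important"] * 6 + ["no consensus"] * 3 + ["not important"] * 2
panel_b = ["important"] * 5 + ["no consensus"] * 4 + ["not important"] * 2
print(round(weighted_kappa(panel_a, panel_b, cats), 2))
```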

Satisfaction

While there was some variation, participants were generally satisfied with the ExpertLens process (Table 4). All satisfaction questions had 7-point response scales, where 1 = Strongly Disagree, 2 = Disagree, 3 = Slightly Disagree, 4 = Neutral, 5 = Slightly Agree, 6 = Agree, and 7 = Strongly Agree. The mean values were rounded to the nearest whole number. Although panelists agreed slightly that participation in the exercise was interesting (mean = 5.31, sd = 1.32) and the survey instrument was easy to use (mean = 4.78, sd = 1.40), they had a neutral opinion on whether participation in this exercise was frustrating (mean = 3.57, sd = 1.80). CQI practitioners were significantly less likely to think that the instrument was easy to use, compared to researchers or those self-characterized as both (p = .025).
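As a small illustration of how these scale summaries were interpreted, the snippet below computes the mean and standard deviation of one 7-point item and maps the rounded mean back to its scale label; the response values are hypothetical.

```python
import statistics

SCALE = {1: "Strongly Disagree", 2: "Disagree", 3: "Slightly Disagree",
         4: "Neutral", 5: "Slightly Agree", 6: "Agree", 7: "Strongly Agree"}

def summarize_item(responses):
    """Mean, SD, and the scale label of the rounded mean for one 7-point item."""
    mean = statistics.mean(responses)
    sd = statistics.stdev(responses)
    return mean, sd, SCALE[round(mean)]

# Hypothetical responses to one satisfaction item
print(summarize_item([5, 6, 4, 5, 7, 5, 6, 3, 5, 6]))
```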

Table 4 Results of the Post-Completion Survey (N = 76)

Participants expressed generally positive opinions about the Phase II online discussion and the value it brought to the online expert elicitation process. Panelists agreed that they were comfortable expressing their views in the discussions (mean = 5.51; sd = 1.23). They also agreed slightly that the exercise brought out opinions they had not considered (mean = 4.76; sd = 1.49) and that discussions gave them a better understanding of the issues (mean = 4.61; sd = 1.51). Finally, panelists' opinions were close to neutral on whether panel members debated each other's viewpoints (mean = 4.47; sd = 1.41), whether discussions caused them to revise their original responses (mean = 4.21; sd = 1.55), and whether they had trouble following discussions (mean = 3.86; sd = 1.69).

While satisfaction with the online process and discussions varied slightly between the panels, there typically was no statistically significant panel size effect. The only exception was that panelists in larger panels were significantly more likely than those in smaller panels to agree that participants debated each other's viewpoints during discussions (mean = 4.74, sd = 1.26 vs. mean = 3.63, sd = 1.54; p = .002).

Finally, participants said that they would likely participate in a similar online panel in the future (mean = 5.09; sd = 1.70); researchers, however, were significantly more likely than the other two groups of panelists to express their willingness to participate (p = .009).

Discussion

The study was designed to explore the feasibility of conducting online expert panels and to examine the experimental replicability of their findings. We focused specifically on the issues of expert participation, consensus development, agreement across panels, and participant experiences. We also investigated the effects of panel size on participation rates and satisfaction with the ExpertLens process used to conduct online panels. Our exploratory study shows that online expert panels may be a practical approach to engaging large and diverse groups of stakeholders in finding consensus on key language issues within an evolving field, such as QI. It also supports the results of previous research showing that virtual panels may expedite the elicitation process, minimize the burden on participants, allow the conduct of larger and more diverse panels, and include geographically distributed participants [8, 9].

Overall, CQI stakeholders demonstrated strong commitment to improving CQI language, and the study participation rate was high, with 66% of participants, who did not receive any honoraria, engaging in all phases of the online elicitation. This number compares favorably to both the 45-50% participation rate typically expected in a traditional Delphi study [20] and the 49% participation rate in a recent online Delphi with just two question phases [8].

Moreover, our panelists generally expressed positive attitudes towards the online approach, finding the elicitation process interesting, the online system easy to use, and the discussion component helpful for improving their understanding of the issues and clarifying their positions. Average satisfaction scores were typically at or above "slightly agree" on positively worded satisfaction items.

Although participation levels did not vary significantly across the panels of different size, the perception of a two-way information exchange, as measured by the post-completion survey questions, was significantly higher in larger than in smaller panels. Therefore, the number of invited participants in online consensus panels may need to be higher than in traditional panels to ensure that a critical mass of participants is achieved not only during the question phases but also during the discussion phase [30]. On the one hand, inviting a larger number of panelists may increase a panel's representativeness [12] and allow for exploring differences not only between, but also within, stakeholder groups. On the other hand, our largest panel (n = 40) was still of a size we considered reasonable for engaging a high percentage of panelists in the discussion; having a very large number of panelists might have a deleterious effect on discussion participation.

Finally, our study suggests that the online approach can be used to conduct multiple parallel panels to test for the reproducibility of panel conclusions. In this study, the level of agreement between panels was fair as measured by four-way kappa [28, 29], and roughly a quarter of all potential features were judged important by all four panels. Comparison across panels provides crucial information for evaluating the potential replicability of panel decisions and an indication of the degree of confidence in the robustness of decisions across panels. By the end of Phase III, all four panels agreed on the status of five out of eleven CQI features. The data feedback and discussion features of the online system appeared to reduce MAD-M values (i.e., increase the level of agreement) between Phase I and Phase III without forcing participants into consensus. By virtue of answering the same questions twice and discussing their perspectives, all four panels agreed on the importance of three out of eleven features to the definition of CQI, and on the lack of importance of two other features.

While our study illustrates the feasibility of conducting online expert panels, it nonetheless has some limitations. In terms of panel size, our results reflect only a modest panel size range; we did not test extremely small or large sizes. Furthermore, we do not know how well we represented QI researchers versus QI practitioners in our sample, because we can only categorize those who actually signed up to participate; however, our Phase I response rate of 77% does not suggest a high level of bias in this regard. Finally, in terms of achieved participation rates and panel results, the findings may primarily reflect the dedication of CQI stakeholders and may not apply to other topics and applications. Previous studies using this online approach [13], however, also indicate that this process can help obtain input from large, diverse, and geographically dispersed groups of stakeholders who are trying to foster exchange and find consensus on often controversial topics and policy questions. Nonetheless, further experimental research is necessary to validate these findings.

Conclusions

In summary, our study illustrates the feasibility of conducting online expert panels and explores the replicability of panel findings. Online panels may be helpful for engaging large and diverse groups of stakeholders for defining agreement on controversial subjects, such as refining and understanding QI language. Additional tests of ExpertLens and other online panel tools, however, should further determine their acceptability and validity as an alternative, or an addition, to a face-to-face panel process for a range of health services research topics and provide detailed information about the best ways to configure and carry out online expert panels.

Endnotes

1. This paper explores the feasibility of the online panel approach; the results on consensus on specific defining features of CQI will be reported elsewhere.

2. By case we mean a feature rated in a given panel. We asked questions about 11 features in 4 panels; in Panel C, one question was not asked in Phase I. Therefore, we had 43 cases in total in Phase I.

References

  1. Jones J, Hunter D: Qualitative research: consensus methods for medical and health services research. British Medical Journal. 1995, 311: 376-380. 10.1136/bmj.311.7001.376.

  2. Fink A, Kosecoff JB, Chassin MR, Brook RH: Consensus Methods: Characteristics and Guidelines for Use. 1991, Santa Monica, CA: RAND, vol. N-3367-HHS

  3. Fitch K, Bernstein SJ, Aguilar MD, Burnand B, LaCalle JR, Lazaro P, Loo Mvh, McDonnell J, Vader JP, Kahan JP: RAND/UCLA Appropriateness Method (RAM). 2001, Santa Monica: RAND Corporation, 109-

  4. Fink A, Kosecoff J, Chassin M, Brook RH: Consensus methods: characteristics and guidelines for use. American Journal of Public Health. 1984, 74: 979-983. 10.2105/AJPH.74.9.979.

  5. Black N, Murphy M, Lamping D, McKee M, Sanderson C, Askham J, Marteau T: Consensus development methods: a review of best practice in creating clinical guidelines. Journal of health services research & policy. 1999, 4: 236-248.

  6. McKenna HP: The Delphi technique: a worthwhile research approach for nursing?. Journal of Advanced Nursing. 1994, 19: 1221-1225. 10.1111/j.1365-2648.1994.tb01207.x.

  7. Raine R, Sanderson C, Black N: Developing clinical guidelines: a challenge to current methods. British Medical Journal. 2005, 331: 631-10.1136/bmj.331.7517.631.

  8. Elwyn G, O'Connor A, Stacey D, Volk R, Edwards A, Coulter A, Thomson R, Barratt A, Barry M, Bernstein S: Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. British Medical Journal. 2006, 333: 417-423. 10.1136/bmj.38926.629329.AE.

  9. Bowles KH, Holmes JH, Naylor MD, Liberatore M, Nydick R: Expert consensus for discharge referral decisions using online Delphi. AMIA Annual Symposium Proceedings. 2003, 2003: 106-109.

  10. Pagliari C, Grimshaw J, Eccles M: The potential influence of small group processes on guideline development. Journal of Evaluation in Clinical Practice. 2001, 7: 165-173. 10.1046/j.1365-2753.2001.00272.x.

  11. Dubrovsky VJ, Kiesler S, Sethna BN: The equalization phenomenon: Status effects in computer-mediated and face-to-face decision-making groups. Human-Computer Interaction. 1991, 6: 119-146. 10.1207/s15327051hci0602_2.

  12. Murphy MK, Black NA, Lamping DL, McKee CM, Sanderson CFB, Askham J: Consensus development methods, and their use in clinical guideline development. Health Technology Assessment. 1998, 2:

  13. Snyder-Halpern R, Thompson C, Schaffer J: Comparison of mailed vs. Internet applications of the Delphi technique in clinical informatics research. Proceedings of the AMIA Symposium. 2000, 809-813.

  14. Brown R: Group processes: Dynamics within and between groups. 2000, Blackwell Pub

  15. Wainfan L, Davis PK: Challenges in virtual collaboration: Videoconferencing, audioconferencing, and computer-mediated communications. 2004, Santa Monica: RAND Corporation

  16. Turoff M, Hiltz SR: Computer-based Delphi processes. Gazing into the oracle: the Delphi method and its application to social policy and public health. Edited by: Adler M, Ziglio E. 1996, Jessica Kingsley Publishers, 56-89.

  17. Vonderwell S: An examination of asynchronous communication experiences and perspectives of students in an online course: A case study. The Internet and Higher Education. 2003, 6: 77-90. 10.1016/S1096-7516(02)00164-1.

  18. Keeney S, Hasson F, McKenna H: A critical review of the Delphi technique as a research methodology for nursing. International Journal of Nursing Studies. 2001, 38: 195-200. 10.1016/S0020-7489(00)00044-4.

  19. Shekelle P, Kahan J, Bernstein S, Leape L, Kamberg C, Park R: The reproducibility of a method to identify the overuse and underuse of medical procedures. New England Journal of Medicine. 1998, 338: 1888-1895. 10.1056/NEJM199806253382607.

  20. Jillson IA: The national drug-abuse policy Delphi. The Delphi Method: Techniques and Applications. Edited by: Linstone H, Turoff M. 2002, 119-154.

  21. Rubenstein L, Hempel S, Farmer M, Asch S, Yano E, Dougherty D, Shekelle P: Finding order in heterogeneity: types of quality-improvement intervention publications. Quality and Safety in Health Care. 2008, 17: 403-408. 10.1136/qshc.2008.028423.

  22. Danz M, Rubenstein L, Hempel S, Foy R, Suttorp M, Farmer M, Shekelle P: Identifying quality improvement intervention evaluations: is consensus achievable?. Quality and Safety in Health Care. 2010, 19: 279-283. 10.1136/qshc.2009.036475.

  23. O'Neill SM, Hempel S, Lim Y-W, Danz M, Foy R, Suttorp MJ, Shekelle PG, Rubenstein LV: Identifying continuous quality improvement publications: What makes an improvement intervention "CQI"?. BMJ Quality and Safety. 2011, doi:10.1136/bmjqs.2010.050880

  24. Dalal SR, Khodyakov D, Srinivasan R, Straus SG, Adams J: ExpertLens: A system for eliciting opinions from a large pool of non-collocated experts with diverse knowledge. Technological Forecasting & Social Change. 2011, 78: 1426-1444. 10.1016/j.techfore.2011.03.021.

  25. Vakil N, van Zanten SV, Kahrilas P, Dent J, Jones R: The Montreal definition and classification of gastroesophageal reflux disease: a global evidence-based consensus. American Journal of Gastroenterology. 2006, 101: 1900-1920. 10.1111/j.1572-0241.2006.00630.x.

  26. Hutchings A, Raine R, Sanderson C, Black N: An experimental study of determinants of the extent of disagreement within clinical guideline development groups. Quality and Safety in Health Care. 2005, 14: 240-245. 10.1136/qshc.2004.013227.

  27. Conger AJ: Integration and generalization of kappas for multiple raters. Psychological Bulletin. 1980, 88: 322-328.

  28. Landis JR, Koch GG: The measurement of observer agreement for categorical data. Biometrics. 1977, 33: 159-174. 10.2307/2529310.

  29. Campbell S, Shield T, Rogers A, Gask L: How do stakeholder groups vary in a Delphi technique about primary mental health care and what factors influence their ratings?. Quality and Safety in Health Care. 2004, 13: 428-434. 10.1136/qshc.2003.007815.

  30. Jones Q, Ravid G, Rafaeli S: Information overload and the message dynamics of online interaction spaces: A theoretical model and empirical exploration. Information Systems Research. 2004, 15: 194-210. 10.1287/isre.1040.0023.

Acknowledgements

This study was supported by the Robert Wood Johnson Foundation (Grant ID 65113: Advancing the science of continuous quality improvement: A framework for identifying, classifying and evaluating continuous quality improvement studies and Grant ID 67890: Providing a framework for the identification, classification, and evaluation of quality improvement initiatives) and the RAND Corporation, with additional funding provided by the Veterans Health Administration.

We would like to thank all study participants for their valuable contributions. In addition, we would like to thank John Adams for assistance in designing the study, Jeremy Miles for advising on data analysis, Aneesa Motala for administrative support, Brian McInnis for technical assistance with panel administration, and Mary Haines for comments on earlier drafts of this manuscript.

Author information

Corresponding author

Correspondence to Dmitry Khodyakov.

Competing interests

DK and SD are developers of the ExpertLens system. The RAND Corporation, a non-profit research institution, is the registered owner of the ExpertLens trademark.

Authors' contributions

All authors have contributed substantially to the manuscript. DK contributed to the study design, was responsible for data collection, performed the data analysis, and wrote the first draft of the manuscript. SH contributed to the study design and data collection, advised on data analysis, and contributed to the manuscript. LR contributed to the study design and manuscript writing. SO led participant randomization, was involved in the study design and data collection processes, and commented on the manuscript. PS, RF, SSS, MD, and SS were involved in the conception and design of the study and the data analysis strategy, provided advice on data interpretation, and contributed to the revisions of the manuscript. All authors have approved the final version of the manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Khodyakov, D., Hempel, S., Rubenstein, L. et al. Conducting online expert panels: a feasibility and experimental replicability study. BMC Med Res Methodol 11, 174 (2011). https://doi.org/10.1186/1471-2288-11-174