
Validity evidence and reliability of a simulated patient feedback instrument

Claudia Schlegel1*, Ulrich Woermann2, Jan-Joost Rethans3 and Cees van der Vleuten4

Author Affiliations

1 Skillslab, Berner Bildungszentrum Pflege, Reichenbachstrasse 118, 3004 Berne, Switzerland

2 Institute of Medical Education, Education and Media Unit, Medical Media Production, University of Bern, Konsumstrasse 13, 3010 Berne, Switzerland

3 Skillslab, Faculty of Health, Medicine & Life Sciences, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands

4 Department of Educational Development and Research, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands


BMC Medical Education 2012, 12:6  doi:10.1186/1472-6920-12-6

Published: 27 January 2012

Abstract

Background

In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF).

Methods

Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings.

Results

All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter.
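For readers unfamiliar with generalizability theory, a D-study projects how reliability changes with the number of judges: the generalizability coefficient is the ratio of true (encounter) variance to true variance plus error variance averaged over judges. The sketch below illustrates this calculation; the variance components are hypothetical values chosen purely for illustration (so that two judges yield a coefficient near 0.77), not the components estimated in this study.

```python
# Illustrative D-study projection for a persons-by-judges design.
# NOTE: var_person and var_residual below are hypothetical, not the
# variance components actually estimated by the authors.

def g_coefficient(var_person: float, var_residual: float, n_judges: int) -> float:
    """Relative generalizability coefficient when each encounter
    is rated by n_judges: true variance over observed-score variance."""
    return var_person / (var_person + var_residual / n_judges)

# Hypothetical components: encounter (true-score) variance and the
# judge-by-encounter interaction plus residual error variance.
var_p, var_res = 0.40, 0.24

for n in (1, 2, 3):
    g = g_coefficient(var_p, var_res, n)
    print(f"{n} judge(s): G = {g:.2f}")  # with n=2, 0.40/0.52 ≈ 0.77
```

Averaging over more judges shrinks the error term, so the coefficient rises with each added judge, which is why a design with two judges can reach an acceptable reliability that a single judge would not.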

Conclusions

The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients.