Open Access Research article

Development and psychometric testing of an instrument to evaluate cognitive skills of evidence based practice in student health professionals

Lucy K Lewis1*, Marie T Williams1 and Timothy S Olds2

Author Affiliations

1 School of Health Sciences, University of South Australia, GPO Box 2471, Adelaide, SA, 5001, Australia

2 Health and Use of Time (HUT) Group, Sansom Institute for Health Research, University of South Australia, GPO Box 2471, Adelaide, SA, 5001, Australia

BMC Medical Education 2011, 11:77  doi:10.1186/1472-6920-11-77

Published: 3 October 2011

Abstract

Background

Health educators need rigorously developed instruments to evaluate cognitive skills relating to evidence based practice (EBP). Previous EBP evaluation instruments have focused on the acquisition and appraisal of the evidence and are largely based in the medical profession. The aim of this study was to develop and validate an EBP evaluation instrument to assess EBP cognitive skills for entry-level health professional disciplines.

Methods

The Fresno test of competence in evidence based medicine was considered in the development of the 'Knowledge of Research Evidence Competencies' instrument (K-REC). The K-REC was reviewed for content validity. Two cohorts of entry-level students were recruited for the pilot study: those who had been exposed to EBP training (physiotherapy students, n = 24) and those who had not been exposed to EBP training (human movement students, n = 76). The K-REC was administered to one cohort of students (n = 24) on two testing occasions to evaluate test-retest reliability. Two raters independently scored the first test occasion (n = 24) to evaluate the inter-rater reliability of the marking guidelines. Construct validity was assessed by comparing the 'exposed' and 'non-exposed' groups and the percentage of students achieving a 'pass' score in each group. Item difficulty was also established.

Results

Among the 100 participants (24 EBP 'exposed' and 76 EBP 'non-exposed' students), there was a statistically significant difference in total K-REC scores (p < 0.0001). Test-retest and inter-rater reliability of the individual items and total scores ranged from moderate to excellent (Cohen's kappa and intraclass correlation coefficients ranging from 0.62 to perfect agreement).
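For readers unfamiliar with the agreement statistic reported above, Cohen's kappa corrects the raw percentage agreement between two raters for the agreement expected by chance. The sketch below is illustrative only; the rater labels and data are hypothetical, not taken from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    po = observed proportion of agreement
    pe = agreement expected by chance, from each rater's marginal
         category frequencies
    """
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pe = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

# Hypothetical pass/fail scores from two independent raters
rater_1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater_2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # 0.667
```

A kappa of 0.62, the lower bound reported in the Results, is conventionally interpreted as substantial agreement, while 1.0 indicates perfect agreement.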

Conclusions

The K-REC is a valid and reliable instrument for evaluating cognitive skills of EBP in entry-level student health professionals. It is quick to administer and easy to score, making it suitable for health educators to evaluate students' knowledge of EBP or the effectiveness of entry-level EBP training.