
Teaching of evidence-based medicine to medical students in Mexico: a randomized controlled trial

Abstract

Background

Evidence-Based Medicine (EBM) is an important competency for the healthcare professional. Experimental evidence of EBM educational interventions from rigorous research studies is limited. The main objective of this study was to assess EBM learning (knowledge, attitudes and self-reported skills) in undergraduate medical students with a randomized controlled trial.

Methods

The educational intervention was a one-semester EBM course in the 5th year of a public medical school in Mexico. The study design was an experimental parallel-group randomized controlled trial for the main outcome measures in the 5th year class (M5 EBM vs. M5 non-EBM groups), and quasi-experimental with static-group comparisons for the 4th year (M4, not yet exposed) and 6th year (M6, exposed 6 months to a year earlier) groups. EBM attitudes, knowledge and self-reported skills were measured using Taylor’s questionnaire and a summative exam comprising a 100-item multiple-choice question (MCQ) test.

Results

A total of 289 medical students were assessed: M5 EBM=48, M5 non-EBM=47, M4=87, and M6=107. There was a higher reported use of the Cochrane Library and secondary journals in the intervention group (M5 EBM vs. M5 non-EBM). Critical appraisal skills and attitude scores were higher in the intervention group (M5 EBM) and in the group of students exposed to EBM instruction during the previous year (M6). The knowledge level after the intervention was higher in the M5 EBM group than in the M5 non-EBM group (p<0.001; Cohen's d=0.88 with Taylor's instrument and 3.54 with the 100-item MCQ test). M6 students, who had received the intervention in the previous year, had a knowledge score higher than the M4 and M5 non-EBM groups, but lower than the M5 EBM group.

Conclusions

Formal medical student training in EBM produced higher scores in attitudes, knowledge and self-reported critical appraisal skills compared with a randomized control group. Data from the concurrent groups add validity evidence to the study, but rigorous follow-up needs to be done to document retention of EBM abilities.


Background

Evidence-based medicine (EBM) has been defined as “the integration of the best research evidence with our clinical expertise and our patient’s unique values and circumstances”, and it has emerged as a core competency for all healthcare professionals [1–3]. Its fundamental principles are: translation of uncertainty into an answerable clinical question, systematic retrieval of the best available evidence, critical appraisal for validity, relevance and applicability, application of the results in practice, and evaluation of the healthcare provider’s performance [4].

Several organizations, including the Institute of Medicine in the United States and the World Federation for Medical Education, have advocated the implementation of EBM educational interventions in undergraduate and postgraduate medical training [2, 5].

The concepts related to EBM and its educational implications have disseminated rapidly in the last decade, and this change needs to be accompanied by strong educational research documenting its effectiveness. The challenges of teaching EBM and the paucity of rigorous educational research publications have prompted some medical educators to question the evidence for EBM teaching effectiveness [6]. Nonetheless, the foundations of EBM that support clinical decision making are intuitively attractive to many clinicians and educators, since EBM integrates the educational process with clinical practice [4].

The quality of the evidence about EBM education is heterogeneous, as has been described in several editorials, narrative reviews and systematic reviews [7–11]. Most reviews have included mainly studies of postgraduate health professionals, and some have included studies of both postgraduate and undergraduate students. Green reviewed 18 reports, mostly resident-directed small-group seminars aimed at improving critical appraisal skills [12]. The most commonly used outcome measure was a multiple-choice exam, and 72% used a traditional journal club format as the teaching strategy. Only seven of the 18 studies in Green’s review analyzed the effectiveness of the intervention; five of these had some type of control group and only one was randomized. Just two studies used an outcome measure with validity evidence, and all five papers measured change in behavior only by self-report. The impact of the interventions focused mainly on critical appraisal, and ranged from no effect to a 23% absolute increase in scores [12].

The Cochrane Collaboration systematic review on teaching critical appraisal skills in health care, which excluded medical students, found three studies that met stringent pre-specified methodological criteria. Two of the three studies reported statistically significant improvements in participants' knowledge in domains of critical appraisal [9]. Another systematic review by Coomarasamy, focused on postgraduate clinicians, found significant effects of EBM educational interventions on knowledge, and more limited effects on attitudes, skills and behavior [10, 11].

Despite the increasing number of medical school and postgraduate programs that have introduced EBM into their curricula, most of the information about it has been reported in the medical literature as observational data and descriptive studies, or as unpublished observations disseminated in medical meetings and informal venues. There are few randomized controlled educational trials of EBM training effectiveness, and the majority have been done in residents or practicing physicians [9–14].

Undergraduate medical students can be a receptive population for EBM concepts, and they will be the practicing clinicians and clinical teachers of the future. Several published studies describe medical schools’ experiences introducing EBM into their curricula and teaching these concepts to undergraduates, with variable outcomes [15–19]. This curricular change has not occurred in many of their developing-country counterparts, and there are few published reports of the implementation of EBM curricula in these settings [20–23]. There is a need to implement EBM educational interventions in the curricula of developing-country medical schools, and to assess their impact with appropriate educational research designs.

The purpose of this study was to assess the educational effectiveness (attitudes, knowledge and skills) of an EBM course in undergraduate medical students.

Methods

Setting

The Mexican Army medical school, located in Mexico City, trains career physicians for the national military healthcare system. It has a six-year program with a traditional curriculum: two years of basic sciences, three years of clinical sciences, and a sixth year of hospital internship. The school is a public institution funded by the federal government. Each yearly class is composed of about one hundred students, mostly Hispanic and of middle or low socioeconomic status.

Overall study design and participants

Main outcomes and subjects

The core portion of the study was a randomized post-test-only control group design for the main outcomes: attitudes, knowledge and skills in EBM. Fifth-year medical students were randomized into two groups, one of which received the educational intervention during the first semester of the academic year (M5 EBM), while the other half (M5 non-EBM) had an Aviation Medicine course (Figure 1). The rest of the 5th year curriculum was similar in that semester. In the second semester the control group had the EBM course and the intervention group had the Aviation Medicine course. The randomization was done by the medical school with a computer-generated list, using the block randomization method with blocks of two to ensure equal sample sizes [24].
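For illustration, the allocation scheme just described (computer-generated block randomization with blocks of two) can be sketched in a few lines of Python. This is a minimal sketch only; the function name, seed and class size are illustrative assumptions, not the school’s actual list-generation software:

```python
import random

def block_randomize(n_students, block_size=2, seed=2012):
    """Allocate students to the two arms in balanced blocks.

    Minimal sketch of blocked allocation with blocks of two; the
    school's actual software and random seed are not reported.
    """
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_students:
        block = ["M5 EBM", "M5 non-EBM"] * (block_size // 2)
        rng.shuffle(block)  # random order within each block keeps the arms balanced
        allocation.extend(block)
    return allocation[:n_students]  # trim if n is not a multiple of the block size

arms = block_randomize(95)  # e.g. a hypothetical class of 95 fifth-year students
print(arms.count("M5 EBM"), arms.count("M5 non-EBM"))  # 48/47 or 47/48
```

Blocks of two guarantee that the two arms never differ by more than one student, which is why the realized group sizes (48 vs. 47) are nearly equal.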

Figure 1

Flow diagram of study participants. Flow diagram summarizing the groups of medical students and the progress of their participation in the study. M4 non-EBM=4th year students with no evidence-based medicine training; M5 EBM and M5 non-EBM=5th year medical students with and without the evidence-based medicine course; M6 EBM=6th year students exposed to the evidence-based medicine course during the year prior to assessment.

Simultaneous validation

Quasi-experimental static-group comparisons were added alongside the randomized trial, with a more junior group of 4th year students not yet exposed to the EBM intervention (M4 non-EBM) and a more senior group of 6th year students who had taken the EBM course during the previous year (M6 EBM). The 4th year students had courses on Medical Informatics, Statistics, Research Methodology and Epidemiology, taught by information technology professionals, statisticians, epidemiologists and basic-science researchers, most of them with no clinical background. The 6th year students were in the hospital internship, and all of them had taken the EBM course during the previous year (half of them six months and half one year before the evaluation). These comparison groups were included to acquire more information from concurrent groups in our population and increase the validity of the study, addressing the history, maturation and contamination threats to validity and exploring the potential EBM knowledge of more senior students [25–27] (Figure 1).

The outcomes were measured in all groups at the end of the first semester of the academic year, after the EBM course ended. All the fifth, fourth and sixth year students were asked to participate in the study, about one hundred students per class.

Intervention

The educational intervention was a one-semester EBM course formally included in the medical school curriculum, with 14 two-hour weekly sessions. The course faculty were six professors trained in EBM teaching, all board-certified physicians in clinical practice, one of them with a postgraduate degree in health professions education and EBM faculty-development training at McMaster University Faculty of Health Sciences in Canada. The course faculty had more than six years of experience teaching EBM to undergraduate medical students and residents, and providing EBM faculty-development workshops to teachers in several medical specialties. The EBM course teachers were not involved in the training of the 4th year students, but they did participate in the training of the 6th year interns. The EBM program was linked with the internship and residency programs in the hospital through the medical school curricular committee and the University Postgraduate Studies Division.

The course instructional strategies included large-group interactive sessions, small-group problem-solving activities, individual and group assignments, and informatics laboratory sessions. Traditional EBM resources were used as course bibliography, including Straus’ book [1] and an EBM text in Spanish written by the course professors [28]. The content and learning objectives of the course are outlined below.

  1. Clinical decision making in medicine

     • List and define the main difficulties for objective decision making in medicine as defined by Eddy

     • Describe the components of a decision in medicine as defined by Eddy

     • Apply the concepts of anatomy of a decision as defined by Eddy in the analysis of a clinical problem

  2. Uncertainty and probability in medicine

     • Define the concepts of uncertainty, probability and odds

     • Understand the relevance of uncertainty in clinical practice

     • Understand the limitations of personal experience in the estimation of probability, as related to diagnosis

     • Define the heuristics used in medicine (representativeness, availability, anchoring and adjustment) and list the cognitive errors a clinician can make when misapplying them

     • Apply the concepts of heuristics in new clinical problems, and discuss the effects of their inappropriate use

  3. Bayes’ theorem

     • Define Bayes’ theorem

     • Define pre-test and post-test probability

     • Define the concepts of diagnostic and therapeutic threshold

     • Explain the utility of Bayes' theorem in clinical medicine, mainly in diagnosis

     • List the limitations of Bayes' theorem in clinical practice

     • Apply Fagan’s nomogram to use Bayes' theorem in a diagnostic problem (a worked sketch follows this outline)

     • Apply the concepts of diagnostic and therapeutic threshold to a clinical problem

  4. Principles of Evidence-Based Medicine

     • Describe the history and origin of EBM

     • Define the concept of EBM

     • List the five steps of EBM, and apply them in a clinical problem

     • Explain the importance of EBM in clinical practice

  5. Reflective medical practice

     • Define the concept of reflection and reflective practitioner

     • Define reflection-in-action and reflection-on-action

     • Apply these concepts in a clinical scenario

  6. Clinicians’ information needs

     • Understand the magnitude of physicians’ information needs

     • Understand the literature describing how clinicians underestimate their information needs

     • Define the percentage of occasions when clinicians recognize and act upon perceived information needs

  7. Clinical questions

     • Define the concepts of background and foreground questions

     • Understand the advantages of structuring questions generated during clinical work

     • List the four components of a foreground clinical question (PICO)

     • Apply these concepts in developing questions from clinical problems

     • List the types of clinical questions (diagnosis, therapy, prognosis, harm, etiology)

  8. Sources of biomedical information

     • List the different sources of biomedical information available

     • Identify the advantages and disadvantages of each source (textbooks, paper and electronic journals, original research papers)

     • Understand the origin, development, cost, and availability of sources of information

  9. The Cochrane Collaboration

     • Describe the history and origin of the Cochrane Collaboration (CC)

     • List the components of the Cochrane Library, and the sources where it’s available

     • Understand the mission, logistics and work of the CC

     • Perform effective searches for systematic reviews on the Cochrane Library

     • Understand the advantages and limitations of the CC

     • Use the Cochrane Library to solve a clinical problem

  10. Search strategies to find the best medical scientific evidence

     • List the main medical databases, and identify their relevance and location

     • Describe the history of Medline

     • Define MeSH terms, Boolean operators, search engine

     • Design search strategies to find valid evidence

     • Use PubMed Clinical Queries

     • Perform effective searches of scientifically valid papers using PubMed, Cochrane Library, OVID Core Medical Library

     • Understand the advantages and disadvantages of searching the different electronic medical databases and the Internet general purpose searching engines

  11. Critical appraisal of the medical literature: Users’ Guides to the Medical Literature

     • Describe the origin and history of the Users’ Guides series to appraise the medical literature

     • List and understand the different hierarchies of evidence, study designs, grades of evidence

     • Understand the relevance of using the original medical literature to solve clinical problems

     • List and understand the three different steps to appraise a research article: internal validity, magnitude of the results and external validity

  12. How to appraise an article about therapy

     • Describe the criteria for internal validity of a therapy article

     • Define randomized controlled trial, bias and random error, allocation concealment, double-blind, intention-to-treat analysis, odds ratio, relative risk, relative risk reduction, absolute risk reduction, number needed to treat, confidence intervals, p values, power and sample size, type I and II errors

     • Understand the importance of all the previously defined concepts to apply in a therapy article

     • Calculate OR, RR, RRR, ARR and NNT from a published therapy article (see the worked sketch after this outline)

     • Use a therapy article to solve a clinical problem

     • Understand the concepts of external validity of a research paper, related to therapy

  13. How to appraise an article about a diagnostic test, the Rational Clinical Examination series

     • Describe the criteria for internal validity of a diagnostic test article

     • Define pre-test and post-test probability, sensitivity, specificity, likelihood ratios, positive and negative predictive value, accuracy

     • Understand the importance of all the previously defined concepts to apply in a diagnosis article

     • Calculate sensitivity, specificity, likelihood ratios from a published diagnosis article

     • Use a diagnosis article to solve a clinical problem

     • Understand the concepts of external validity of a research paper, related to diagnosis

     • Describe the origin and evolution of the Rational Clinical Examination JAMA series

     • Use a JAMA Rational Clinical Examination paper to solve a clinical problem

  14. How to appraise a systematic review or meta-analysis

     • Define meta-analysis, systematic review (qualitative and quantitative)

     • Describe the advantages and limitations of systematic reviews and meta-analysis

     • Describe the criteria for internal validity of a systematic review article

     • Define bias and random error, odds ratio, relative risk, relative risk reduction, absolute risk reduction, number needed to treat, confidence intervals, forest plot, effect size

     • Understand the importance of all the previously defined concepts applied to a systematic review article

     • Calculate OR, RR, RRR, ARR and NNT from a published systematic review article

     • Use a systematic review article to solve a clinical problem

     • Understand the concepts of external validity of a systematic review

  15. Clinical practice guidelines

     • Define clinical practice guidelines

     • Describe the sequence of developing an evidence-based clinical practice guideline

     • Understand the advantages and limitations of a clinical guideline

     • Describe and understand the internal validity requirements of a clinical guideline article

     • List the available resources for clinical guidelines

     • Use a clinical practice guideline to solve a clinical problem
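To make the outline’s quantitative objectives concrete (Bayes’ theorem via likelihood ratios in topics 3 and 13, and the therapy effect measures in topics 12 and 14), here is a small illustrative Python sketch. It is not part of the original course materials, and the input numbers are hypothetical:

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Bayes' theorem via odds: post-test odds = pre-test odds x LR
    (the calculation that Fagan's nomogram performs graphically)."""
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

def therapy_measures(events_tx, n_tx, events_ctl, n_ctl):
    """Standard effect measures from a randomized trial's 2x2 table.

    Assumes a beneficial treatment (fewer events on treatment), so ARR > 0.
    """
    cer = events_ctl / n_ctl              # control event rate
    eer = events_tx / n_tx                # experimental event rate
    rr = eer / cer                        # relative risk
    rrr = 1 - rr                          # relative risk reduction
    arr = cer - eer                       # absolute risk reduction
    nnt = 1 / arr                         # number needed to treat
    odds_tx = events_tx / (n_tx - events_tx)
    odds_ctl = events_ctl / (n_ctl - events_ctl)
    odds_ratio = odds_tx / odds_ctl       # odds ratio
    return dict(RR=rr, RRR=rrr, ARR=arr, NNT=nnt, OR=odds_ratio)

# Hypothetical diagnostic problem: pre-test probability 30%, positive LR of 6
print(round(post_test_probability(0.30, 6), 2))   # ~0.72

# Hypothetical trial: 15/100 events on treatment vs. 25/100 on control
print(therapy_measures(15, 100, 25, 100))         # ARR=0.10, NNT=10, RR=0.6, ...
```

The same helper applies to appraising systematic reviews (topic 14), since pooled estimates are reported on the same OR/RR scales.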

Outcomes and instrumentation

The assessed outcomes were attitudes, knowledge and skills related to EBM. Two instruments were used: Taylor’s questionnaire, a published instrument designed to evaluate the effectiveness of evidence-based medicine teaching [29], and a 100-item multiple-choice question test developed specifically for this study.

Taylor’s instrument was categorized as a level 1 instrument in a systematic review of tools to evaluate EBM education, since it has reasonable psychometric properties, has been evaluated for validity with at least three sources of evidence, and is recommended for the summative evaluation of individual trainees [30]. The instrument includes items assessing critical appraisal skills, evidence-use behaviors, knowledge and attitudes regarding evidence-based clinical practice [29]. The attitude portion of the questionnaire consists of Likert-scale statements about the use of scientific evidence. Each statement is scored on a five-point scale, responses are summed to obtain a total attitude score, and the possible range of scores is 7 to 35. To determine an overall score for the confidence in critical appraisal skills section, six statements were scored on a scale where “Very confident” was assigned a score of 5, “Not at all confident” a score of 1, and “Don’t know” a score of 0. The scores of the six questions were summed into a global critical appraisal confidence score, where the lowest scores indicated little or no confidence and 30 indicated complete confidence.

The knowledge part of the questionnaire includes six multiple true-false questions, each with three items, using ‘true’, ‘false’ or ‘don’t know’ response categories. Correct responses score 1, incorrect responses are negatively scored (−1) to discourage guessing, and ‘don’t know’ responses score 0. Item scores were summed into an overall knowledge score, with a possible range of −18 to +18. In a previous paper we translated the questionnaire into Spanish with the author’s permission and verified it by back-translation [21].
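As an illustration of the scoring rules just described, a minimal sketch follows. The item wording and answer key belong to Taylor’s published instrument and are not reproduced here, so the inputs below are placeholders:

```python
def attitude_score(responses):
    """Sum of 7 Likert items, each scored 1-5; possible range 7-35."""
    assert len(responses) == 7 and all(1 <= r <= 5 for r in responses)
    return sum(responses)

def knowledge_score(answers, key):
    """Taylor knowledge scoring: +1 correct, -1 incorrect, 0 'don't know'.

    18 true/false items (6 questions x 3 parts); possible range -18 to +18.
    """
    score = 0
    for given, correct in zip(answers, key):
        if given == "dont_know":
            score += 0          # abstaining neither rewards nor penalizes
        elif given == correct:
            score += 1
        else:
            score -= 1          # negative marking to discourage guessing
    return score

# Hypothetical example: 12 correct, 4 incorrect, 2 'don't know' -> score of 8
key = ["true"] * 18
answers = ["true"] * 12 + ["false"] * 4 + ["dont_know"] * 2
print(knowledge_score(answers, key))  # 8
```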

The other instrument was the final summative test of the Evidence-Based Medicine course. It was developed, administered, scored and analyzed following the 12 steps for effective test development described by Downing [31]. Item analysis was performed on a pilot application of the test with ITEMAN for Windows (Assessment Systems Corporation, St. Paul, MN), and the information obtained was used to improve the instrument for this study, choosing the better-performing items while preserving content validity. The pilot application of the original 140-item EBM test included 57 examinees and had a Cronbach’s alpha of 0.82. Using the item-analysis information, the test developers selected 100 multiple-choice questions (MCQs) for the final version of the instrument.
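Cronbach’s alpha, the reliability statistic reported for the pilot, can be computed from an examinee-by-item matrix of 0/1 scores with the standard formula. A minimal sketch is below; ITEMAN’s actual output is not reproduced, and the demonstration matrix is random (so its alpha is near zero, unlike a real test with correlated items):

```python
import numpy as np

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for an examinees x items score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    X = np.asarray(item_matrix, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()     # per-item variances, summed
    total_var = X.sum(axis=1).var(ddof=1)       # variance of examinee totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 57 examinees x 140 items matrix of 0/1 (correct/incorrect) scores
rng = np.random.default_rng(0)
demo = (rng.random((57, 140)) < 0.55).astype(int)
print(round(cronbach_alpha(demo), 2))  # near 0 for independent random items
```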

The instruments were administered to the students over three consecutive weeks. The students had up to three hours to answer the test and the questionnaire, to minimize the risk of a speeded examination. Data from Taylor’s questionnaire were captured in a Microsoft Excel spreadsheet, and op-scan answer sheets were used for the EBM MCQ test to allow item analysis.

Statistical analysis

The piloting of the EBM MCQ test provided preliminary data on differences and standard deviations, and the sample size calculation was performed for the primary hypothesis of a knowledge increase, with a power of 0.90 (beta error of 0.10) and a two-sided alpha error of 0.05. After a thorough review of published studies reporting the magnitude of EBM knowledge differences in undergraduate medical students, and careful consideration by the course faculty of the smallest meaningful difference (SMD) in this parameter, a difference of 10 questions between the intervention and control groups was judged reasonable. Using this estimate, about 31 students per group would be necessary to detect an effect size of 0.5 or larger.

SPSS for Windows 15.0 and Instat 3.0 for Macintosh were used for data analysis. The use-of-evidence items in Taylor’s questionnaire were compared between the M5 EBM and M5 non-EBM students with the non-parametric Mann–Whitney U test. The attitude and critical appraisal confidence scores measured with Taylor’s instrument were compared among groups using the Kruskal–Wallis test with Dunn’s multiple comparisons. The groups’ knowledge test scores on both instruments were compared with one-way analysis of variance with planned comparisons. A p-value of less than 0.05 was considered statistically significant. Cohen’s d with pooled standard deviations was calculated as a measure of effect size for the critical appraisal skills, attitude and knowledge scores [32]. Item analysis of the EBM test data was performed with ITEMAN for Windows 3.2 (Assessment Systems Corporation, St. Paul, MN; http://www.assess.com).
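For reference, a minimal Python/scipy sketch of two of the analyses named above, Cohen’s d with pooled standard deviations and the Mann–Whitney U test. The score vectors are simulated from reported group summaries, not the raw study data:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation, as in the paper."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Simulated score vectors drawn from the reported group means/SDs (illustration only)
rng = np.random.default_rng(0)
ebm = rng.normal(58.5, 7.9, 48)
non_ebm = rng.normal(32.6, 6.6, 47)

print(stats.mannwhitneyu(ebm, non_ebm))   # non-parametric two-group comparison
print(round(cohens_d(ebm, non_ebm), 2))   # standardized effect size, ~3.5 here
```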

Ethical aspects

The instruments did not carry individual student identifiers, to minimize the risk of potential harm to the participants. This study was reviewed by the Institutional Review Board of the Office for the Protection of Research Subjects of the University of Illinois at Chicago and by the Research Committee of the Mexican Army Medical School, and was considered exempt from individual written informed consent.

Results

Subjects

The flow of study participants through the trial is outlined in Figure 1. A total of 289 medical students were assessed. One student from the M5 non-EBM group was sick on the assessment day. Five subjects in the M4 non-EBM group and seven in the M6 EBM group did not participate because they were on clinical duty on the testing day.

The students’ mean age (±SD) per group was: M4=21.5±1.8, M5 EBM=22.8±2.0, M5 non-EBM=22.4±2.2 and M6=23.5±1.9 years. The groups’ gender composition was similar, with a predominance of women over men (about 60/40).

Use of the evidence

The use of scientific evidence, explored in the first section of Taylor’s questionnaire, was assessed with two main questions: “What type of resources do you use to keep up to date?” and “What type of resources do you use to solve a specific health care problem?” The answers by group and type of resource are presented in Figures 2 and 3.

Figure 2

Use of evidence to keep up to date. Distribution of answers to the question "What type of resources do you use to keep up to date?" in the different medical student groups. (M4=4th year students with no evidence-based medicine training; M5 EBM and M5 non-EBM=5th year medical students with and without the evidence-based medicine course; M6=6th year students exposed to the evidence-based medicine course during the year prior to assessment; EBM=Evidence-Based Medicine; ACPJC=American College of Physicians Journal Club). *** = P<0.001, Mann–Whitney U test for the M5 EBM vs. M5 non-EBM comparison.

Figure 3

Use of evidence to solve a health problem. Distribution of answers to the question "What type of resources do you use to solve a specific health problem?" in the different medical student groups. (M4=4th year students with no evidence-based medicine training; M5 EBM and M5 non-EBM=5th year medical students with and without the evidence-based medicine course; M6=6th year students exposed to the evidence-based medicine course during the year prior to assessment; EBM=Evidence-Based Medicine; ACPJC=American College of Physicians Journal Club). * = P<0.01, *** = P<0.001, Mann–Whitney U test for the M5 EBM vs. M5 non-EBM comparison.

The pattern of responses was the same for resources used to keep up to date and resources used to solve a specific health care problem. The answers were similar among the four student groups regarding the use of review articles, original research journals, textbooks, Internet resources and teachers, but there were statistically significant differences in the use of secondary journals (e.g. the American College of Physicians Journal Club) and the Cochrane Library. The experimental group (M5 EBM) reported a higher use of original research articles to solve a specific health problem than the randomized comparison group (M5 non-EBM) (P<0.01). The M5 EBM and M6 groups reported a higher use of secondary journals than the M4 and M5 non-EBM groups, and a similar pattern was found for use of the Cochrane Library (P<0.001) (Figures 2 and 3).

Confidence in critical appraisal skills

The intervention group reported higher confidence on all items in this section of Taylor’s instrument (assessing study design, evaluating bias, evaluating statistical tests) (P<0.001). The global critical appraisal confidence scores for the study groups were as follows: M4=11.7±6.3 (mean±SD), M5 non-EBM=8.4±5.7, M5 EBM=17.1±3.6 and M6=16.8±4.9. The summary data for each group are shown in Figure 4; the experimental group (M5 EBM) had higher scores than the randomized control group (M5 non-EBM) and the M4 comparison group (P<0.001). The M4 score was slightly higher than that of the M5 non-EBM group (P<0.05), and the M6 group had higher scores than M4 and M5 non-EBM (P<0.001).

Figure 4

Critical appraisal skills scores. Critical appraisal confidence scores in the different groups of medical students, measured with Taylor’s questionnaire. (M4=4th year students with no evidence-based medicine training; M5 EBM and M5 non-EBM=5th year medical students with and without the evidence-based medicine course; M6=6th year students exposed to the evidence-based medicine course during the year prior to assessment; CI=confidence interval).

Attitudes

The EBM attitude scores measured with Taylor’s questionnaire are shown in Figure 5. The scores were similar between the groups that did not receive the EBM educational intervention: the M4 group scored 24.5±5.2 (mean±SD) and the M5 non-EBM group 24.0±5.0 (P>0.05). The M5 EBM group had an attitude score of 28.7±2.2, higher than the M4 and M5 non-EBM groups (P<0.001). The M6 students had an attitude score of 26.7±3.6, higher than the control groups and lower than the M5 EBM group (P<0.05). Cohen’s d for the comparison of M5 EBM vs. M5 non-EBM was 1.21 (Table 1).

Figure 5

Attitude scores. Attitude scores in the different groups of medical students, measured with Taylor’s questionnaire. (M4=4th year students with no evidence-based medicine training; M5 EBM and M5 non-EBM=5th year medical students with and without the evidence-based medicine course; M6=6th year students exposed to the evidence-based medicine course during the year prior to assessment; CI=confidence interval).

Table 1 Effect size (Cohen’s “ d ”) in critical appraisal confidence, attitude and knowledge scores when comparing the different medical student groups

Knowledge scores with Taylor’s instrument

The knowledge scores measured with Taylor’s questionnaire are shown in Figure 6. The scores were similar between the non-EBM groups: M4=1.06±3.16 (mean±SD) and M5 non-EBM=1.13±3.27 (P=0.91). The M5 EBM intervention group had a knowledge score of 4.21±3.73, higher than those of M4 and M5 non-EBM. The planned contrast for the main comparison showed that the intervention group had a higher knowledge score than the randomized control group (P<0.001). The M6 group had a knowledge score of 2.44±3.77, higher than both control groups (P<0.01) but lower than M5 EBM (P<0.01). Cohen’s d for the main knowledge comparison of M5 EBM vs. M5 non-EBM was 0.88 (Table 1).

Figure 6

Knowledge scores with Taylor’s instrument. Knowledge scores in the different groups of medical students, measured with Taylor’s questionnaire. (M4=4th year students with no evidence-based medicine training; M5 EBM and M5 non-EBM=5th year medical students with and without the evidence-based medicine course; M6=6th year students exposed to the evidence-based medicine course during the year prior to assessment; CI=confidence interval).

Knowledge scores with EBM summative MCQ test

The results of the 100-item MCQ EBM knowledge test are presented as percent-correct scores (Figure 7). The reliability of the test (Cronbach’s alpha) was 0.72 in the M5 EBM group and 0.83 in the M6 group. The scores were similar between the non-EBM groups: M4=30.6±5.6 (mean±SD) and M5 non-EBM=32.6±6.6 (P=0.18). The M5 EBM group had a test score of 58.5±7.9, higher than M4 and M5 non-EBM. The planned contrast between M5 EBM and M5 non-EBM showed that the educational intervention group had a higher knowledge score (P<0.001). The M6 group had a knowledge score of 41.0±10.9, higher than the control groups (P<0.001) but lower than M5 EBM (P<0.001). Cohen’s d for the main knowledge comparison of M5 EBM vs. M5 non-EBM was 3.54 (Table 1).

Figure 7

Knowledge scores EBM test. Knowledge scores in the different groups of medical students, measured with the 100 multiple-choice questions EBM test. (M4=4th year students with no evidence-based medicine training; M5 EBM and M5 non-EBM=5th year medical students with and without the evidence-based medicine course; M6=6th year students exposed to the evidence-based medicine course during the year prior to assessment; CI=confidence interval).
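As a consistency check, the reported effect size for the main MCQ comparison can be recovered from the group summary statistics alone (means, SDs, and group sizes of 48 and 47). A short sketch using only numbers reported above:

```python
from math import sqrt

def d_from_summary(m1, s1, n1, m2, s2, n2):
    """Cohen's d from group means and SDs via the pooled standard deviation."""
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# M5 EBM (58.5 +/- 7.9, n=48) vs. M5 non-EBM (32.6 +/- 6.6, n=47)
print(round(d_from_summary(58.5, 7.9, 48, 32.6, 6.6, 47), 2))  # ~3.55, matching
# the reported 3.54 up to rounding of the published summary statistics
```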

Discussion

This study presents experimental evidence that an EBM educational intervention in medical students, in the setting of a developing-country medical school, improves attitudes, knowledge and self-reported critical appraisal skills.

The research design was a parallel-group randomized controlled trial with a quasi-experimental static-group comparison, taking advantage of a strong study design, its implications for internal validity, and the causal inferences that can be drawn from the results [24, 25, 33]. Recent studies and systematic reviews suggest that well-planned and educationally sound EBM interventions can have a reasonable impact on the abilities of the individuals who undergo these educational experiences [9, 14, 34].

There are few published randomized controlled trials studying the impact of EBM education, and very few from developing countries [9–12, 14]. Some of the randomized trials did not find an effect of EBM educational interventions, which points to the need for continuing research in this area [35–37].

In the present study the educational intervention was one semester long, mandatory, and assessed with a summative test; all of these factors probably contributed to the magnitude of the findings in the randomized comparison. Almost all published studies have used only one assessment instrument, whereas our study used two evaluation tools: a published questionnaire with validity evidence designed to measure the effectiveness of evidence-based practice teaching, and an ad hoc objective test developed for the course summative assessment [29, 30]. This characteristic of our study design provided an opportunity to concurrently validate an already published instrument and a new objective test developed specifically for our course, contributing to the body of literature supporting the validity of Taylor’s instrument.

We found an increase in critical appraisal skills and in positive attitudes toward evidence-based practice. These findings are similar to those of Ghali et al. [16], with a higher reported use of secondary journals and Cochrane Library systematic reviews. It is important to recognize that these are self-reports; the students’ actual use of these resources in their daily routines was not directly measured.

In our study, the answers to the two questions about the use of evidence (to keep up to date and to solve clinical problems) followed a pattern similar to our previous paper, as measured with Taylor’s questionnaire [21]. The M5 intervention group reported higher use of the Cochrane Library and secondary journals on both items, and higher use of original research papers to solve a healthcare problem. All the students apparently make frequent use of textbooks, Internet resources, teachers and residents as sources of information in health care, as previously reported [21]. These resources are readily available and culturally accepted in the daily practice of medicine.

The use of the Cochrane Library and secondary journals was higher in our intervention group, which suggests that these resources were virtually unknown to the students before the course and that their reported use increased as a result of the educational intervention. Even though these EBM information resources have been used extensively in developed countries in recent decades, developing countries have been slower to adopt them as formal information sources, probably because of limited availability and misunderstanding of their potential use [38, 39]. The Cochrane Library has been translated into Spanish by the Iberoamerican Cochrane Network as the Cochrane Library Plus (http://cochrane.bvsalud.org), which should improve the availability and use of this resource in Spanish-speaking countries.

This study found that the EBM intervention improved medical students’ confidence in several aspects of critical appraisal, as well as in statistical concepts relevant to the correct interpretation of published research findings. Interestingly, the medical students who were not exposed to the EBM course (M4 and M5 non-EBM) had already taken courses on Biostatistics and Scientific Methodology and nonetheless had lower scores on this outcome. Those courses probably did not have a substantial impact, or their impact was short-lived; another explanation could be that the previous courses on related subjects were given by non-clinicians and/or basic-research scientists with no clinical orientation, and thus had a minor effect on the EBM outcomes. The increase in critical appraisal skills agrees with several published reports of EBM teaching in undergraduate students [15, 16]. Other studies have not found a significant improvement in critical appraisal skills, probably because of several factors inherent to the complexity of educational research interventions in healthcare settings [35–37]. In our study, the effect size in the critical appraisal skills score immediately after the course was higher than 1.0, which can be interpreted as large using Cohen's classification [32]. A similar effect size was found when comparing the students who had taken the EBM course six months to one year earlier with the control group (Table 1).

It is important to recognize that self-perceived skills can overestimate true competence and performance, so these findings may not reflect the students’ real critical appraisal and statistical skills, although confidence in a skill is an important component of the performance spectrum [40, 41].

The overall attitude score findings in our study are congruent with several published papers, showing an increase immediately after the course of about 17–20% [16, 21, 23, 42]. The 6th year students’ attitude score was higher than those of the control group and the 4th year students, which suggests that the attitude change persists from six months to a year after the course. Our previous study found very similar attitude scores measured with the same instrument, which adds reproducibility evidence for the use of Taylor’s instrument to measure EBM attitudes in our population of students [21]. It is noteworthy that some studies, including randomized controlled trials of EBM teaching, did not find a change in attitudes, probably because of the shorter duration of the workshops and related activities [36, 37].

A major challenge in assessing EBM teaching is demonstrating an increase in “knowledge” of evidence-based clinical practice, since several disciplines intersect in the optimal use of scientific evidence (research methodology, biomedical informatics, biostatistics, clinical epidemiology), which together comprise a large body of knowledge and facts. In this investigation, large effect sizes were found in the main randomized comparison (M5 EBM vs. M5 non-EBM) for the EBM knowledge scores measured with both Taylor’s questionnaire and the EBM MCQ test. The knowledge increase after the course, relative to the control group, was about 73% as measured with Taylor’s instrument and 25.9% as measured with the EBM test. These increases can be interpreted as large when expressed as effect sizes using Cohen's classification: 0.88 and 3.54, respectively [32]. The fact that the changes were apparent with two different instruments adds validity evidence to the conclusion that the EBM course significantly improved the students’ knowledge of EBM and its related concepts.

The EBM knowledge level was similar in the M4 and M5 non-EBM groups, which strongly suggests that EBM knowledge in the absence of a specific educational intervention is minimal even in the senior years of our medical school, and that there was no maturation threat to internal validity.

The significantly lower EBM knowledge scores of the 6th year students, six months to a year after a similar intervention, suggest the possibility of knowledge decay, with knowledge decreasing over time unless continuous learning and practice occur [43]. This difference could also be explained by the fact that the 6th year measurement was done in a different group of students, not the randomized 5th year class, so it may reflect a difference in students' ability rather than true knowledge decay; it is also uncertain how this would affect their use of EBM in clinical practice.

Other published randomized controlled trials of EBM educational interventions have produced conflicting results regarding knowledge change: some showed minimal or no differences after the intervention [35–37], whereas others found knowledge score increases of 36 to 58% [42, 44]. These differences are probably due to the different natures of the educational interventions, their duration, and the educational context (e.g. a mandatory course). The use of effect-size indices such as Cohen’s d in EBM educational research publications could help convey the magnitude of differences among studies in a more standardized fashion, and promote reflection on the potential educational significance of the findings [45, 46].

A limitation of the study is that it does not measure actual competence and performance of EBM-related skills in a real clinical setting. Another potential limitation concerns generalizability, since the medical school has particular characteristics arising from its military nature, which could limit extrapolation to other medical schools. As with any implementation of a new course, the course instructors had an intense interest in developing and implementing as effective an educational intervention as possible, so there could be a tendency toward confirmation bias. This is to be expected in an educational experimental study, where it is not possible to blind either the instructors or the students to the educational intervention; the data analysis was blinded in an attempt to decrease this bias. Another possible source of bias is the Hawthorne effect, since students in the randomized intervention group were aware that the course's effectiveness was being assessed, unlike the students who had taken the regular course previously [25].

Conclusions

Our study has implications for the design, implementation and assessment of EBM educational interventions in developing countries. First, it shows that EBM courses can be successfully implemented and embedded in a medical school’s curriculum. Second, it provides evidence that such a course can improve knowledge, attitudes, critical appraisal confidence, and self-reported skills and behaviors related to EBM, although how much of this knowledge is retained over time remains uncertain. Third, it shows that using international test-development standards can yield a reliable instrument with evidence of construct validity for measuring EBM knowledge acquisition. The study findings contributed to the quality-improvement process in the medical school, and provided data for the planning and implementation of subsequent EBM courses. Educational planning will address the course’s clinical links and its vertical and horizontal integration with the rest of the curriculum (explicit and hidden), and more studies with rigorous follow-up should be undertaken to assess long-term retention of EBM competencies. Published models and recommendations to increase the depth and duration of EBM learning should be taken into account when initiating educational interventions of this nature [47, 48].

References

  1. Straus SE, Glasziou P, Richardson WS, Haynes RB: Evidence-Based Medicine: How to Practice and Teach It. 4th edition. 2011, Philadelphia, PA: Elsevier Churchill Livingstone
  2. Institute of Medicine (US): Health Professions Education: A Bridge to Quality. 2003, Washington, DC: National Academies Press
  3. Whitcomb ME: Why we must teach evidence-based medicine. Acad Med. 2005, 80: 1-2. 10.1097/00001888-200501000-00001
  4. Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, Porzsolt F, Burls A, Osborne J: Second International Conference of Evidence-Based Health Care Teachers and Developers. Sicily statement on evidence-based practice. BMC Med Educ. 2005, 5 (1): 1. 10.1186/1472-6920-5-1. http://www.biomedcentral.com/1472-6920/5/1
  5. Report from the World Federation for Medical Education: WFME Task Force on Defining International Standards in Basic Medical Education. Med Educ. 2000, 34: 665-675
  6. Dobbie AE, Schneider FD, Anderson AD, Littlefield J: What evidence supports teaching evidence-based medicine? Acad Med. 2000, 75: 1184-1185. 10.1097/00001888-200012000-00010
  7. Norman GR, Shannon SI: Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. CMAJ. 1998, 158: 177-181
  8. Taylor R, Reeves B, Ewings P, Binns S, Keast J, Mears R: A systematic review of the effectiveness of critical appraisal skills training for clinicians. Med Educ. 2000, 34: 120-125. 10.1046/j.1365-2923.2000.00574.x
  9. Horsley T, Hyde C, Santesso N, Parkes J, Milne R, Stewart R: Teaching critical appraisal skills in healthcare settings. Cochrane Database of Systematic Reviews. The Cochrane Library, Issue 05, Art. No. CD001270
  10. Coomarasamy A, Taylor R, Khan KS: A systematic review of postgraduate teaching in evidence-based medicine and critical appraisal. Med Teach. 2003, 25: 77-81. 10.1080/0142159021000061468
  11. Coomarasamy A, Khan KS: What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ. 2004, 329: 1017-1021. 10.1136/bmj.329.7473.1017
  12. Green ML: Graduate medical education training in clinical epidemiology, critical appraisal, and evidence-based medicine: a critical review of curricula. Acad Med. 1999, 74: 686-694. 10.1097/00001888-199906000-00017
  13. Hatala R, Guyatt G: Evaluating the teaching of evidence-based medicine. JAMA. 2002, 288: 1110-1112. 10.1001/jama.288.9.1110
  14. Flores-Mateo G, Argimon JM: Evidence based practice in postgraduate healthcare education: a systematic review. BMC Health Serv Res. 2007, 7: 119. 10.1186/1472-6963-7-119
  15. Pitkäla K, Mäntyranta T, Strandberg TE, Mäkelä M, Vanhanen H, Varonen H: Evidence-based medicine – how to teach critical scientific thinking to medical undergraduates. Med Teach. 2000, 22: 22-26. 10.1080/01421590078760
  16. Ghali WA, Saitz R, Eskew AH, Gupta M, Quan H, Hershman WY: Successful teaching in evidence-based medicine. Med Educ. 2000, 34: 18-22. 10.1046/j.1365-2923.2000.00402.x
  17. Burrows S, Moore K, Arriaga J, Paulaitis G, Lemkau HL: Developing an “Evidence-Based Medicine and Use of the Biomedical Literature” component as a longitudinal theme of an outcomes-based medical school curriculum: year 1. J Med Libr Assoc. 2003, 91: 34-41
  18. Barnett SH, Kaiser S, Morgan LK, Sullivant J, Siu A, Rose D: An integrated program for evidence-based medicine in medical school. Mt Sinai J Med. 2000, 67: 163-168
  19. Srinivasan M, Weiner M, Breitfled PP, Brahmi F, Dickerson KL, Weiner G: Early introduction of an evidence-based medicine course to preclinical medical students. J Gen Intern Med. 2002, 17: 58-65. 10.1046/j.1525-1497.2002.10121.x
  20. Elessi K, Mokhallalati M, Madbak S: Evidence-based medicine comes to Gaza. Lancet. 2011, 378 (9806): 1834-1835. 10.1016/S0140-6736(11)61513-4
  21. Sánchez-Mendiola M: Evidence-based medicine teaching in the Mexican Army Medical School. Med Teach. 2004, 26 (7): 661-663. 10.1080/01421590412331282309
  22. Sánchez-Mendiola M: La Medicina Basada en Evidencias y la Escuela de Medicina [Evidence-based medicine and the medical school]. Gac Med Mex. 2004, 140: 314-316
  23. Liabsuetrakul T, Suntharasaj T, Tangtrakulwanich B, Uakritdathikarn T, Pornsawat P: Longitudinal analysis of integrating evidence-based medicine into a medical student curriculum. Fam Med. 2009, 41 (8): 585-588
  24. Kang M, Ragan B, Park J: Issues in outcomes research: an overview of randomization techniques for clinical trials. J Athl Train. 2008, 43 (2): 215-221. 10.4085/1062-6050-43.2.215
  25. Fraenkel JR, Wallen NE: How to Design and Evaluate Research in Education. 5th edition. 2003, New York: McGraw-Hill
  26. Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE, Finkelstein J: The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inform Assoc. 2006, 13: 16-23
  27. Howe A, Keogh-Brown M, Miles S, Bachmann M: Expert consensus on contamination in educational trials elicited by a Delphi exercise. Med Educ. 2007, 41 (2): 196-204. 10.1111/j.1365-2929.2006.02674.x
  28. Lifshitz A, Sánchez-Mendiola M: Medicina Basada en Evidencias [Evidence-Based Medicine]. 2002, México: McGraw-Hill Interamericana
  29. Taylor R, Reeves B, Mears R, Keast J, Binns S, Ewings P, Khan K: Development and validation of a questionnaire to evaluate the effectiveness of evidence-based practice teaching. Med Educ. 2001, 35: 544-547. 10.1046/j.1365-2923.2001.00916.x
  30. Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, Whelan C, Green M: Instruments for evaluating education in evidence-based practice. A systematic review. JAMA. 2006, 296: 1116-1127. 10.1001/jama.296.9.1116
  31. Downing SM: Twelve steps for effective test development. Handbook of Test Development. Edited by: Downing SM, Haladyna TM. 2006, Mahwah, NJ: Lawrence Erlbaum Associates, 3-25
  32. Cohen J: Statistical Power Analysis for the Behavioral Sciences. 2nd edition. 1988, Hillsdale, NJ: Lawrence Erlbaum Associates
  33. Cook DA: Randomized controlled trials and meta-analysis in medical education: what role do they play? Med Teach. 2012, 34 (6): 468-473. 10.3109/0142159X.2012.671978
  34. Khan KS, Coomarasamy A: A hierarchy of effective teaching and learning to acquire competence in evidence-based medicine. BMC Med Educ. 2006, 6: 59. 10.1186/1472-6920-6-59. http://www.biomedcentral.com/1472-6920/6/59
  35. Feldstein DA, Maenner MJ, Srisurichan R, Roach MA, Vogelman BS: Evidence-based medicine training during residency: a randomized controlled trial of efficacy. BMC Med Educ. 2010, 10: 59. 10.1186/1472-6920-10-59. http://www.biomedcentral.com/1472-6920/10/59
  36. Taylor RS, Reeves BC, Ewings PE, Taylor RJ: Critical appraisal skills training for health care professionals: a randomized controlled trial. BMC Med Educ. 2004, 4 (1): 30. 10.1186/1472-6920-4-30. http://www.biomedcentral.com/1472-6920/4/30
  37. Bradley P, Oterholt C, Herrin J, Nordheim L, Bjørndal A: Comparison of directed and self-directed learning in evidence-based medicine: a randomized controlled trial. Med Educ. 2005, 39 (10): 1027-1035. 10.1111/j.1365-2929.2005.02268.x
  38. Letelier LM, Zamarin N, Andrade M, Gabrielli L, Caiozzi G, Viviani P, Riquelme A: Exploring language barriers to Evidence-based Health Care (EBHC) in post-graduate medical students: a randomised trial. Educ Health (Abingdon). 2007, 20 (3): 82
  39. Tomatis C, Taramona C, Rizo-Patrón E, Hernández F, Rodríguez P, Piscoya A, Gonzales E, Gotuzzo E, Heudebert G, Centor RM, Estrada CA: Evidence-based medicine training in a resource-poor country, the importance of leveraging personal and institutional relationships. J Eval Clin Pract. 2011, 17 (4): 644-650. 10.1111/j.1365-2753.2011.01635.x
  40. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L: Accuracy of physician self-assessment compared with observed measures of competence. A systematic review. JAMA. 2006, 296: 1094-1102. 10.1001/jama.296.9.1094
  41. Lai NM, Teng CL: Self-perceived competence correlates poorly with objectively measured competence in evidence based medicine among medical students. BMC Med Educ. 2011, 11: 25. 10.1186/1472-6920-11-25. http://www.biomedcentral.com/1472-6920/11/25
  42. Cheng HM, Guo FR, Hsu TF, Chuang SY, Yen HT, Lee FY, Yang YY, Chen TL, Lee WS, Chuang CL, Chen CH, Ho T: Two strategies to intensify evidence-based medicine education of undergraduate students: a randomised controlled trial. Ann Acad Med Singapore. 2012, 41 (1): 4-11
  43. Rubin DC, Wenzel AE: One hundred years of forgetting: a quantitative description of retention. Psychol Rev. 1996, 103: 734-760
  44. Krueger PM: Teaching critical appraisal: a pilot randomized controlled outcomes trial in undergraduate osteopathic medical education. J Am Osteopath Assoc. 2006, 106 (11): 658-662
  45. Sullivan GM, Feinn R: Using effect size - or why the P value is not enough. J Grad Med Educ. 2012, 4 (3): 279-282
  46. Norman G: The effectiveness and effects of effect sizes. Adv Health Sci Educ Theory Pract. 2003, 8 (3): 183-187
  47. West CP, Jaeger TM, McDonald FS: Extended evaluation of a longitudinal medical school evidence-based medicine curriculum. J Gen Intern Med. 2011, 26 (6): 611-615. 10.1007/s11606-011-1642-8
  48. Hatala R, Keitz SA, Wilson MC, Guyatt G: Beyond journal clubs. Moving toward an integrated evidence-based medicine curriculum. J Gen Intern Med. 2006, 21: 538-541. 10.1111/j.1525-1497.2006.00445.x


Acknowledgements

We thank all the faculty and students who participated in the implementation and evaluation of the EBM course, and Rocío García-Durán, Joel Villamar-Chulín and Fernando Flores-Hernández for data processing, analysis, and graphical design.

Author information


Corresponding author

Correspondence to Melchor Sánchez-Mendiola.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

MS, LK and SM planned, designed and implemented the EBM course and the summative test, and applied the assessment instruments. MS, SD and AS participated in the design of the study and the statistical analysis. MS drafted the initial version of the manuscript. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Sánchez-Mendiola, M., Kieffer-Escobar, L.F., Marín-Beltrán, S. et al. Teaching of evidence-based medicine to medical students in Mexico: a randomized controlled trial. BMC Med Educ 12, 107 (2012). https://doi.org/10.1186/1472-6920-12-107
