Evidence based practice in postgraduate healthcare education: A systematic review

Abstract

Background

Training in Evidence-Based Practice (EBP) has been widely implemented throughout medical school and residency curricula. The aim of this study is to systematically review studies that assessed the effectiveness of EBP teaching to improve knowledge, skills, attitudes and behavior of postgraduate healthcare workers, and to describe instruments available to evaluate EBP teaching.

Methods

The design is a systematic review of randomized, non-randomized, and before-after studies. The data sources were MEDLINE, the Cochrane Library, EMBASE, CINAHL and ERIC, from 1966 through 2006. Main outcomes were knowledge, skills, attitudes and behavior towards EBP. Standardized effect sizes (E-S) were calculated and categorized as small (E-S < 0.2), small to moderate (E-S between 0.2 and 0.5), moderate to large (E-S between 0.51 and 0.79), or large (E-S > 0.79). Reliability and validity of the instruments used to evaluate education were assessed. We excluded studies that were not original, that were performed in medical students, or that focused on prescribing practices, specific health problems, theoretical reviews of individual components of EBP, continuing medical education in general, or testing the effectiveness of implementing guidelines.

Results

Twenty-four studies met our inclusion criteria. There were 15 outcomes within the 10 studies for which E-S could be calculated. The E-S ranged from 0.27 (95%CI: -0.05 to 0.59) to 1.32 (95%CI: 1.11 to 1.53). Studies assessing skills, behavior and/or attitudes had a "small to moderate" E-S. Only 1 of the 2 studies assessing knowledge had an E-S of 0.57 (95%CI: 0.32 to 0.82), and 2 of the 4 studies that assessed total test score outcomes had a "large" E-S. There were 22 instruments used, but only 10 had 2 or more types of validity or reliability evidence.

Conclusion

Small improvements in knowledge, skills, attitudes or behavior are noted when these are measured alone. A large improvement in skills and knowledge in EBP is noted when they are measured together in a total test score. Very few studies used validated tests.

Background

One of the most consistent findings in health-service research is the gap between best practice (as determined by scientific evidence) on the one hand and actual clinical care on the other [1, 2]. Over the past decade, evidence-based clinical guidelines have become a major feature of healthcare provision. Biomedical researchers in many countries have established programs to garner evidence in the diagnosis and treatment of health problems, and to disseminate these guidelines in order to improve the quality of care provision. However, several studies have suggested that clinical use of these guidelines does not occur, that between 10 and 40% of patients do not receive care based on current scientific evidence, and that ≥ 20% of care provided is not needed or is potentially harmful to the patients [1, 3–5].

A strategy to reduce these deficits in care provision is to increase the number of Evidence Based Practice (EBP) training programs [6–8]; their goal being to improve outcomes for patients by increasing postgraduate health care knowledge, skills and attitudes towards EBP [9]. However, published reports on effectiveness of these training schemes have shown conflicting results [10–13].

A crucial aspect in evaluating education programs is the choice of instrument for evaluating the effect of the educational training [14]. The rigor with which investigators and educators construct and/or administer the instrument could affect the reliability, validity and feasibility of the evaluation [14, 15]. A systematic and comprehensive review of existing instruments is therefore necessary to describe the relationship between the different educational instruments and the effectiveness of an EBP course in increasing knowledge, skills, attitudes and behavior in EBP, and thus to select the instrument that best assesses the effectiveness of EBP training.

Hence, the purpose of the present study was to perform a systematic review of studies that had assessed the effectiveness of teaching EBP with the objective of improving the knowledge, critical appraisal skills, attitudes and behavior of postgraduate healthcare workers. We also examined the measures used to evaluate the effectiveness of the interventions, together with their reliability and validity.

Methods

Search strategy and study selection

We searched (1) MEDLINE, (2) the Cochrane Library, (3) EMBASE, (4) the Cumulative Index of Nursing and Allied Health Literature (CINAHL®) and (5) ERIC. We designed a search strategy for MEDLINE, accessed via PubMed, for studies investigating the effectiveness of EBP training in clinical practice, using free text and the Medical Subject Headings (MeSH) terms evidence based medicine, evidence based health care, evidence based practice, critical appraisal, knowledge, attitude, skills, behavior, clinical competence, teach, education intervention, courses, journal club, workshops, multifaceted intervention, residents, physicians, nurses, health care professionals, and postgraduates. The literature search period covered January 1966 through December 2006, with no language restrictions. We also reviewed the reference lists of relevant original papers and reviews.

We aimed to identify all randomized, non-randomized and before-after comparison studies that assessed the effectiveness of teaching EBP designed to improve knowledge, skills, attitudes and behavior in postgraduate healthcare workers. Our exclusion criteria were studies that focused on (a) prescribing; (b) specific health problems; (c) theoretical reviews of individual components of EBP (searching skills, formulating questions); (d) continuing medical education in general (not specifically in EBP); (e) undergraduates; (f) testing the effectiveness of implementing guidelines; (g) evaluating teaching methods using IT devices (PDA or computer-based reminders); (h) non-original studies; and (i) medical students. When several papers were published from the same population, the publication with the longest follow-up was preferred.

Data abstraction

Two investigators (G.F-M., J.M.A) independently abstracted the articles that met the selection criteria. Discrepancies were resolved by consensus. We reviewed each article that met the selection criteria and abstracted the data by using standardized data abstraction forms. Data abstracted were author, year of publication, country, design, participants (discipline and level), sample size, outcome, EBP intervention, duration and frequency of intervention, instruments for evaluating education, feasibility, and the types of reliability and validity assessed.

Feasibility was defined as documentation of some measure of ease of implementation of the questionnaire: the time required to administer the instrument, the time required to score it, and the costs involved in administration and scoring. Reliability is concerned with that portion of measurement that is due to permanent effects which persist from sample to sample. Two broad types of reliability were abstracted: test-retest score (temporal stability) and internal consistency. The types of validity assessed were those based on content, internal structure (internal consistency and dimensionality), and relationships with other variables (responsive and discriminative criteria).

We assigned the types of outcome to the following categories: knowledge of EBP; skills, defined as the participant applying knowledge by performing EBP steps in given scenarios; attitudes towards EBP; and behavior, defined as actual performance of EBP in practice. When two or more outcomes were combined in a score, we described this as a total test score.

We used the recommended questions for appraising reports of medical education interventions to assess study quality [14].

Data synthesis

For those outcomes for which it was possible, we calculated an effect size (E-S) for each outcome category; the E-S is a measure of the magnitude of an intervention effect [16]. The E-S is the difference in means divided by the square root of the pooled-group variances. Because it is expressed on a standardized scale, converting the effects of the different studies to E-S enables comparisons to be made between studies even when outcomes were measured in different units. E-S calculations were made using the Effect Size Generator software program [17]. The E-S was defined as "small" (E-S < 0.2), "small to moderate" (E-S between 0.2 and 0.5), "moderate to large" (E-S between 0.51 and 0.79), or "large" (E-S > 0.79). We could not perform a meta-analysis of the E-S for several reasons: (a) the heterogeneity and diversity of the outcomes reported did not allow a common metric scale to be used across studies; (b) information necessary for pooling studies (such as variance estimates) was missing in many studies; and (c) the diversity of the studies (including populations, interventions and follow-up time) was not amenable to pooling.
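
For reference, the standardized effect size described above corresponds to the usual pooled-standard-deviation formulation (Cohen's d). The sketch below uses generic symbols (group means, standard deviations and sample sizes) that are not taken from the included studies, and the confidence-interval expression is the standard large-sample approximation rather than necessarily the exact method used by the software.

```latex
% Standardized effect size (Cohen's d) with a pooled standard deviation.
% \bar{x}_1, \bar{x}_2 : group means (e.g. intervention vs. control, or post vs. pre)
% s_1, s_2             : group standard deviations
% n_1, n_2             : group sizes
\[
  d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
  \qquad
  s_p = \sqrt{\frac{(n_1 - 1)s_1^{2} + (n_2 - 1)s_2^{2}}{n_1 + n_2 - 2}}
\]
% Approximate standard error and 95% confidence interval for d:
\[
  \operatorname{SE}(d) \approx \sqrt{\frac{n_1 + n_2}{n_1 n_2} + \frac{d^{2}}{2(n_1 + n_2)}},
  \qquad
  d \pm 1.96\,\operatorname{SE}(d)
\]
```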

We assessed publication bias by using the Begg and Egger tests and funnel plots, which graphically display the magnitude of the effect estimate against the inverse of the variance of the study. All statistical analyses were conducted using Stata software version 9.0 (STATA Corp, College Station, TX) and S-PLUS version 7 (Insightful Corporation, Seattle, WA).
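
As an illustration of how one of these asymmetry tests works, the sketch below implements Egger's regression test in Python (the review itself used Stata and S-PLUS); the function name and the example effect sizes and standard errors are hypothetical and are not data from the included studies. Egger's test regresses the standardized effect (effect divided by its standard error) on precision (the inverse of the standard error) and checks whether the intercept differs from zero.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry.
# Assumptions: `effects` are study effect sizes and `std_errors` their
# standard errors; the numbers below are made up for illustration only.
import numpy as np
from scipy import stats


def egger_test(effects, std_errors):
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    y = effects / se                              # standardized effects
    x = 1.0 / se                                  # precision
    X = np.column_stack([np.ones_like(x), x])     # design matrix: intercept + slope
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    intercept = coef[0]
    dof = len(y) - 2
    resid = y - X @ coef
    sigma2 = resid @ resid / dof                  # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)         # covariance of the coefficients
    t_stat = intercept / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_stat), dof)    # two-sided p-value for the intercept
    return intercept, p_value


if __name__ == "__main__":
    d = [0.30, 0.45, 0.55, 0.60, 0.80, 1.10, 0.40, 0.70, 0.50, 0.90]   # hypothetical
    se = [0.16, 0.20, 0.13, 0.25, 0.11, 0.22, 0.18, 0.21, 0.19, 0.24]  # hypothetical
    intercept, p = egger_test(d, se)
    print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```

A significant intercept would suggest small-study effects; the Begg test is a rank-correlation analogue of the same idea.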

Results

Study characteristics

We identified 481 published articles based on our search criteria. After reviewing the abstracts, we retrieved the full text of 29 articles and assessed them for information on the effectiveness of EBP training in postgraduate healthcare workers. After full-text review, 24 reports were included in the current evaluation (Figure 1).

Figure 1. Study selection process.

The studies were published between 1988 and 2006 (see Additional file 1). There were 11 randomized controlled trials (RCT) [18–28], 5 non-randomized clinical trials (NRCT) [29–33] and 8 before-after studies [34–41]. The studies were geographically heterogeneous, and sample sizes varied considerably (between 12 and 800 subjects). In most of the studies the participants were medical residents. Teaching methods included workshops, multifaceted interventions, internet-based interventions and journal clubs, the journal club being the most common format [18, 21, 30, 31, 35]. The duration of the teaching schedules ranged from 15 minutes to several years.

Neither the Begg test nor the Egger test was significant, and the funnel plot did not suggest any publication or related bias (Figure 2).

Figure 2. Funnel plot of the effect sizes of randomized controlled trials, non-randomized controlled trials and before-after studies.

Characteristics of EBP evaluation instruments

We found 22 instruments for evaluating education in EBP, with two instruments being used in more than one study [33, 36, 38, 40]. Feasibility of implementation was poorly reported for all instruments: none reported the time required to administer or score the instrument, and none estimated the financial cost of implementation. Ten instruments (45.4%) were validated with at least two types of evidence of validity or reliability. Responsive validity was the type most commonly tested (90.9%), followed by discriminative (27.3%) and content validity (27.3%) (Table 1).

Table 1 Psychometric characteristics of educational instruments

Assessment of Outcomes

There were 15 outcomes within the 10 studies for which E-S could be calculated [18, 20, 26, 30, 31, 33, 34, 36–38] (Figure 2). Of these, 4 had a non-significant E-S [20, 30, 31, 33]. The E-S ranged from 0.27 (95%CI: -0.05 to 0.59) for an attitudes outcome [20] to 1.32 (95%CI: 1.11 to 1.53) for a total test score [36] (Figure 2).

Within the outcome groups, 2 of the total test score outcomes had a significant E-S > 0.79. Five studies assessed skills, two assessed behavior and two assessed attitudes; all had a "small to moderate" E-S (range: 0.2 to 0.5). One of the two studies that assessed knowledge had an E-S of 0.57 (95%CI: 0.32 to 0.82) [20], defined as "moderate to large", while the other had a "small to moderate" E-S [30]. None of the knowledge, skills, attitudes or behavior outcomes had an E-S > 0.79 (Figure 3).

Figure 3. Forest plot of the effect size (E-S) of randomized controlled trials, non-randomized controlled trials and before-after studies. E-S corresponds to the magnitude of an intervention effect. Boxes are the E-S estimates from each study; the horizontal bars are 95% CIs. The size of each box is proportional to the weight of the study, and the studies are sorted by weight in the plot. The table on the right side of the graph indicates whether two or more types of validity were used and which kind of intervention was used: journal club, workshops, multifaceted intervention, or other interventions. ■ Yes; □ No.

Of the different types of intervention, the workshop was the most frequent (35.3%), followed by multifaceted interventions (29.6%) (see Additional file 1 and Figure 2).

Quality assessment

We used an adaptation of the quality measure from Reed et al. [14] and examined 13 criteria of study quality (see Additional file 2). On average, the studies met more than half of the quality criteria. Only two studies met the criterion "Are long-term effects assessed?" [38, 42], and only one study did not meet the criterion "Is validity of instruments reported?".

Discussion

This review sought to identify studies that examined the effectiveness of EBP education in improving knowledge, skills, behavior and attitudes in EBP in postgraduate health care. This is important from the medical education standpoint, with such interventions being a means of improving the quality of care provision. In our review we identified a small but significant improvement in skills, knowledge, behavior and attitudes after EBP intervention. We identified a large improvement (E-S > 0.79) in EBP when measured as a total test score. Two of the four included studies that measured a total test score showed an E-S above 0.79 [36, 38]. Both studies used a validated test with high reliability and validity for assessing knowledge and skill in the usual domains of evidence-based medicine: asking focused questions, searching for good answers, critically appraising the literature, and applying the conclusions to practice [36, 43].

However, the poor quality of these studies precludes firm conclusions on the E-S of improving knowledge, skills, attitudes or behavior following EBP education. Of the 21 studies, 7 were before-after studies that did not employ a non-intervention group for comparison. The majority of the studies had a small sample size (median of 59 participants; range, 12–800). Many studies provided little detail on how the questionnaires were developed and validated, how they were administered, and how long before the intervention they were given. All the studies were conducted in North America, the United Kingdom, Australia, Germany and Hong Kong, and so do not accurately reflect developing countries. Only two studies were designed to assess long-term effects on skills [26, 38], while the rest assessed short-term learning. The studies in this review were not able to distinguish whether the observed outcomes were the result of receiving the intervention or of the desire of the health care professional to change. Integrating theories of behavior change into education programs is one of the keys to successful education development: individuals with a high motivation to learn were more active in the education programs and showed more sustained changes in behavior and attitude [44].

Our results are consistent with a previous systematic review [11], which found small changes in knowledge at the level of the resident but, in contrast, a large improvement in undergraduate medical students. Another systematic review [10] showed that standalone teaching improved knowledge but not skills, attitudes or behavior. Finally, a systematic review of the effectiveness of critical appraisal skills training for clinicians showed an overall improvement of 68% in the assessed outcomes following intervention, but only one study used a randomized controlled design and the methodological quality of the included studies was poor [13].

This review also examined which studies had used a validated instrument to assess the effectiveness of the intervention. Changes in health care workers' knowledge and skills are relatively easy to detect with validated instruments, but changes in behavior and attitudes are more difficult to measure; several authors have proposed assessment in the practice setting, or conducting qualitative studies [37, 45]. None of the studies reported health care outcomes, and none documented any measure of ease of implementation, the time required to administer the instrument, the time required to score it, or the costs of administration and scoring. Only 9 of the 19 instruments (47.4%) reported 2 or more types of validity or reliability evidence. The choice of measurement method is a crucial step in the evaluation of educational interventions because many evaluation methods are not sensitive enough to measure the effectiveness of the interventions, which could lead to incorrect interpretation of results [14]. In addition, the use of validated tests enables comparison of results between different studies [14, 46]. This is an important area for further research, and one in which healthcare research workers need to document the reliability and validity of existing measures rather than continue developing new instruments for each new project.

Only one other systematic review of EBP teaching, with a smaller number of studies than our present review, had addressed the effectiveness of educational interventions and included a detailed analysis of the evaluation instruments [12]. Another systematic review assessed the available instruments for evaluating EBP teaching but did not report on the effectiveness of EBP teaching [15]. The results of our systematic review confirm the findings of these previous assessments, indicating that the instruments used to evaluate education in EBP contain few types of validity and reliability evidence.

There are several limitations to this review. The eligibility of studies in our systematic evaluation was limited to published reports, and our resources did not permit an extensive search of the literature outside the stated databases. However, it has been shown that the results of reviews incorporating non-catalogued literature do not differ substantially from those of reviews that omit it [47], and no significant publication bias was found in our analyses. One of the strengths of the present systematic review is the use of the effect size, the goal being to obtain a standardized outcome measure that enables comparisons to be made of the results from different studies.

The results of this review provide an outline of common themes for future research: (a) randomized controlled studies with appropriate sample sizes and validated tests are warranted to assess the effectiveness of EBP training; (b) instruments with strong evidence of validity and reliability, trans-culturally adapted, and whose evaluation domains correspond to knowledge, skills, attitudes and behavior in EBP, need to be developed; (c) studies should examine the importance of personality traits and the intention to change of health-care professionals; (d) studies should aim to improve outcomes for patients by increasing physicians' knowledge, skills and attitudes towards EBP; and (e) theories of behavior change should be integrated into education programs and their effect on clinical competence measured.

Conclusion

Randomized controlled trials, non-randomized controlled trials and before-after studies showed a small improvement in knowledge, skills, attitudes and behavior following EBP training, together with a large improvement in knowledge and skills when measured as a total test score. However, the quality of the evidence precludes practical recommendations for EBP education in postgraduate health-care professionals. More research into education in medicine is needed, as is greater collaboration with organizations and individuals interested in preserving standards in academic medicine. Programs that train health-care professionals have a responsibility for education and research: they must stimulate interest in EBP education and must evaluate these interventions. EBP education and other types of medical education interventions should be evaluated in a manner similar to that expected for interventions such as drug therapy or diagnostic studies.

References

  1. Grol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003, 362: 1225-1230. 10.1016/S0140-6736(03)14546-1.

  2. Brown LC, Johnson JA, Majumdar SR, Tsuyuki RT, McAlister FA: Evidence of suboptimal management of cardiovascular risk in patients with type 2 diabetes mellitus and symptomatic atherosclerosis. CMAJ. 2004, 171: 1189-1192.

  3. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA: The quality of health care delivered to adults in the United States. N Engl J Med. 2003, 348: 2635-2645. 10.1056/NEJMsa022615.

  4. Leape LL, Weissman JS, Schneider EC, Piana RN, Gatsonis C, Epstein AM: Adherence to practice guidelines: the role of specialty society guidelines. Am Heart J. 2003, 145: 19-26. 10.1067/mhj.2003.35.

  5. Avezum A, Cavalcanti AB, Sousa AG, Farsky PS, Knobel M: [Adjuvant therapy in acute myocardial infarction: evidence based recommendations]. Rev Assoc Med Bras. 2000, 46: 363-368. 10.1590/S0104-42302000000400038.

  6. Barzansky B, Etzel SI: Educational programs in US medical schools, 2004-2005. JAMA. 2005, 294: 1068-1074. 10.1001/jama.294.9.1068.

  7. Medicine I: Health Professions Education: A bridge to quality. Edited by: Press NA. 2003, Washington DC

  8. Hatala R, Guyatt G: Evaluating the teaching of evidence-based medicine. JAMA. 2002, 288: 1110-1112. 10.1001/jama.288.9.1110.

  9. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS: Evidence based medicine: what it is and what it isn't. BMJ. 1996, 312: 71-72.

  10. Coomarasamy A, Khan KS: What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ. 2004, 329: 1017-10.1136/bmj.329.7473.1017.

  11. Norman GR, Shannon SI: Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. CMAJ. 1998, 158: 177-181.

  12. Green ML: Graduate medical education training in clinical epidemiology, critical appraisal, and evidence-based medicine: a critical review of curricula. Acad Med. 1999, 74: 686-694. 10.1097/00001888-199906000-00017.

  13. Taylor S, Muncer S: Redressing the power and effect of significance. A new approach to an old problem: teaching statistics to nursing students. Nurse Educ Today. 2000, 20: 358-364. 10.1054/nedt.2000.0429.

  14. Reed D, Price EG, Windish DM, Wright SM, Gozu A, Hsu EB, Beach MC, Kern D, Bass EB: Challenges in systematic reviews of educational intervention studies. Ann Intern Med. 2005, 142: 1080-1089.

  15. Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, Whelan C, Green M: Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006, 296: 1116-1127. 10.1001/jama.296.9.1116.

  16. Rosenthal: Meta-analytic procedures for combining studies with multiple effect sizes. Psychological Bulletin. 1986, 99: 400-406. 10.1037/0033-2909.99.3.400.

  17. Devilly GJ: The Effect Size Generator. 2004, [http://www.swin.edu.au/victims]

  18. Taylor RS, Reeves BC, Ewings PE, Taylor RJ: Critical appraisal skills training for health care professionals: a randomized controlled trial [ISRCTN46272378]. BMC Med Educ. 2004, 4: 30-10.1186/1472-6920-4-30.

  19. Cheng GY: Educational workshop improved information-seeking skills, knowledge, attitudes and the search outcome of hospital clinicians: a randomised controlled trial. Health Info Libr J. 2003, 20 (Suppl 1): 22-33. 10.1046/j.1365-2532.20.s1.5.x.

  20. Forsetlund L, Bradley P, Forsen L, Nordheim L, Jamtvedt G, Bjorndal A: Randomised controlled trial of a theoretically grounded tailored intervention to diffuse evidence-based public health practice [ISRCTN23257060]. BMC Med Educ. 2003, 3: 2-10.1186/1472-6920-3-2.

  21. Linzer M, Brown JT, Frazier LM, DeLong ER, Siegel WC: Impact of a medical journal club on house-staff reading habits, knowledge, and critical appraisal skills. A randomized control trial. JAMA. 1988, 260: 2537-2541. 10.1001/jama.260.17.2537.

  22. Macrae HM, Regehr G, McKenzie M, Henteleff H, Taylor M, Barkun J, Fitzgerald GW, Hill A, Richard C, Webber EM, McLeod RS: Teaching practicing surgeons critical appraisal skills with an Internet-based journal club: A randomized, controlled trial. Surgery. 2004, 136: 641-646. 10.1016/j.surg.2004.02.003.

  23. Schilling K, Wiecha J, Polineni D, Khalil S: An interactive web-based curriculum on evidence-based medicine: design and effectiveness. Fam Med. 2006, 38: 126-132.

  24. Stevermer JJ, Chambliss ML, Hoekzema GS: Distilling the literature: a randomized, controlled trial testing an intervention to improve selection of medical articles for reading. Acad Med. 1999, 74: 70-72. 10.1097/00001888-199901000-00021.

  25. Cabell CH, Schardt C, Sanders L, Corey GR, Keitz SA: Resident utilization of information technology. J Gen Intern Med. 2001, 16: 838-844. 10.1046/j.1525-1497.2001.10239.x.

  26. Smith CA, Ganschow PS, Reilly BM, Evans AT, McNutt RA, Osei A, Saquib M, Surabhi S, Yadav S: Teaching residents evidence-based medicine skills: a controlled trial of effectiveness and assessment of durability. J Gen Intern Med. 2000, 15: 710-715. 10.1046/j.1525-1497.2000.91026.x.

  27. Villanueva EV, Burrows EA, Fennessy PA, Rajendran M, Anderson JN: Improving question formulation for use in evidence appraisal in a tertiary care setting: a randomised controlled trial [ISRCTN66375463]. BMC Med Inform Decis Mak. 2001, 1: 4-10.1186/1472-6947-1-4.

  28. Haynes RB, Johnston ME, McKibbon KA, Walker CJ, Willan AR: A program to enhance clinical use of MEDLINE. A randomized controlled trial. Online J Curr Clin Trials. 1993, Doc No 56: 4005.

  29. Green ML, Ellis PJ: Impact of an evidence-based medicine curriculum based on adult learning theory. J Gen Intern Med. 1997, 12: 742-750. 10.1046/j.1525-1497.1997.07159.x.

  30. FU: Is a journal club effective for teaching critical appraisal skills?. Academic Psychiatry. 1999, 23: 205-209.

  31. Bazarian JJ, Davis CO, Spillane LL, Blumstein H, Schneider SM: Teaching emergency medicine residents evidence-based critical appraisal skills: a controlled trial. Ann Emerg Med. 1999, 34: 148-154. 10.1016/S0196-0644(99)70222-2.

  32. Ross R, Verdieck A: Introducing an evidence-based medicine curriculum into a family practice residency--is it effective?. Acad Med. 2003, 78: 412-417. 10.1097/00001888-200304000-00019.

  33. Akl EA, Izuchukwu IS, El Dika S, Fritsche L, Kunz R, Schunemann HJ: Integrating an evidence-based medicine rotation into an internal medicine residency program. Acad Med. 2004, 79: 897-904. 10.1097/00001888-200409000-00018.

  34. Ibbotson T, Grimshaw J, Grant A: Evaluation of a programme of workshops for promoting the teaching of critical appraisal skills. Med Educ. 1998, 32: 486-491. 10.1046/j.1365-2923.1998.00256.x.

  35. Kellum JA, Rieker JP, Power M, Powner DJ: Teaching critical appraisal during critical care fellowship training: a foundation for evidence-based critical care medicine. Crit Care Med. 2000, 28: 3067-3070. 10.1097/00003246-200008000-00065.

  36. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer HH, Kunz R: Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ. 2002, 325: 1338-1341. 10.1136/bmj.325.7376.1338.

  37. Baum KD: The impact of an Evidence-Based Medicine workshop on residents' attitudes towards and self-reported ability in evidence-based practice. Med Educ Online. 2003, 8: [http://www.med-ed-online.org/res00053.htm]

  38. McCluskey A, Lovarini M: Providing education on evidence-based practice improved knowledge but did not change behaviour: a before and after study. BMC Med Educ. 2005, 5: 40-10.1186/1472-6920-5-40.

  39. Straus SE, Ball C, Balcombe N, Sheldon J, McAlister FA: Teaching evidence-based medicine skills can change practice in a community hospital. J Gen Intern Med. 2005, 20: 340-343. 10.1111/j.1525-1497.2005.04045.x.

  40. Dinkevich E, Markinson A, Ahsan S, Lawrence B: Effect of a brief intervention on evidence-based medicine skills of pediatric residents. BMC Med Educ. 2006, 6: 1-10.1186/1472-6920-6-1.

  41. Lucas BP, Evans AT, Reilly BM, Khodakov YV, Perumal K, Rohr LG, Akamah JA, Alausa TM, Smith CA, Smith JP: The impact of evidence on physicians' inpatient treatment decisions. J Gen Intern Med. 2004, 19: 402-409. 10.1111/j.1525-1497.2004.30306.x.

  42. Smith AD, South PK, Levander OA: Effect of gold(I) compounds on the virulence of an amyocarditic strain of coxsackievirus B3. Biol Trace Elem Res. 2001, 84: 67-80. 10.1385/BTER:84:1-3:067.

  43. Ramos KD, Schafer S, Tracz SM: Validation of the Fresno test of competence in evidence based medicine. BMJ. 2003, 326: 319-321. 10.1136/bmj.326.7384.319.

  44. Kinzie MB: Instructional design strategies for health behavior change. Patient Educ Couns. 2005, 56: 3-15. 10.1016/j.pec.2004.02.005.

  45. Straus SE, Green ML, Bell DS, Badgett R, Davis D, Gerrity M, Ortiz E, Shaneyfelt TM, Whelan C, Mangrulkar R: Evaluating the teaching of evidence based medicine: conceptual framework. BMJ. 2004, 329: 1029-1032. 10.1136/bmj.329.7473.1029.

  46. Boon H, Stewart M: Patient-physician communication assessment instruments: 1986 to 1996 in review. Patient Educ Couns. 1998, 35: 161-176. 10.1016/S0738-3991(98)00063-9.

  47. Egger M, Juni P, Bartlett C, Holenstein F, Sterne J: How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess. 2003, 7: 1-76.

Acknowledgements

We thank the Foundation IDIAP Jordi Gol for financial help in the English language translation and editing of the manuscript.

Author information

Corresponding author

Correspondence to Josep M Argimon.

Additional information

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

JMA and GFM were equally responsible for design, quality assessment, review of studies, and preparation of the manuscript.

Electronic supplementary material

12913_2007_460_MOESM1_ESM.doc

Additional file 1: Characteristics of studies assessing effectiveness of teaching critical appraisal skills. This table provides the characteristics of the included studies. (DOC 122 KB)

12913_2007_460_MOESM2_ESM.doc

Additional file 2: Quality criteria for evaluating studies. This table lists the quality criteria applied to the studies included in the systematic review. (DOC 122 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Flores-Mateo, G., Argimon, J.M. Evidence based practice in postgraduate healthcare education: A systematic review. BMC Health Serv Res 7, 119 (2007). https://doi.org/10.1186/1472-6963-7-119

  • DOI: https://doi.org/10.1186/1472-6963-7-119
