Open Access Research article

How was the intern year?: self and clinical assessment of four cohorts, from two medical curricula

Gillian Laven1*, Dorothy Keefe1, Paul Duggan2 and Anne Tonkin3

Author Affiliations

1 School of Medicine, Faculty of Health Sciences, the University of Adelaide, Adelaide, South Australia, Australia

2 School of Paediatrics and Reproductive Health, Faculty of Health Sciences, the University of Adelaide, Adelaide, South Australia, Australia

3 Medicine Learning and Teaching Unit, Faculty of Health Sciences, the University of Adelaide, Adelaide, South Australia, Australia

BMC Medical Education 2014, 14:123  doi:10.1186/1472-6920-14-123

Published: 24 June 2014

Abstract

Background

Problem-based curricula have provoked controversy amongst educators and students regarding outcome in medical graduates, supporting the need for longitudinal evaluation of curriculum change. As part of a longitudinal evaluation program at the University of Adelaide, a mixed method approach was used to compare the graduate outcomes of two curriculum cohorts: traditional lecture-based ‘old’ and problem-based ‘new’ learning.

Methods

Graduates were asked to self-assess preparedness for hospital practice and consent to a comparative analysis of their work-place based assessments from their intern year. Comparative data were extracted from 692 work-place based assessments for 124 doctors who graduated from the University of Adelaide Medical School between 2003 and 2006.

Results

Self-assessment: Overall, graduates of the lecture-based curriculum rated the medical program significantly higher than graduates of the problem-based curriculum. However, there was no significant difference between the two curriculum cohorts with respect to their preparedness in 13 clinical skills. There were, however, two of the 13 broad practitioner competencies in which the cohorts rated their preparedness significantly differently: problem-based graduates rated themselves better prepared in their ‘awareness of legal and ethical issues’, while lecture-based graduates rated themselves better prepared in their ‘understanding of disease processes’.

Work-place based assessment: There were no significant differences between the two curriculum cohorts for ‘Appropriate Level of Competence’ and ‘Overall Appraisal’. Across the 14 work-place based assessment skills assessed for competence, no significant difference was found between the cohorts.

Conclusions

The differences in the perceived preparedness for hospital practice of the two curriculum cohorts are not reflected in the work-place based assessments of their competence as interns. No significant difference was found between the two cohorts in relation to their knowledge and clinical skills. However, the results suggest a trend in ‘communication with peers and colleagues in other disciplines’ (χ2 (3, N = 596) = 13.10, p = 0.056) that requires further exploration. In addition, we have learned that student confidence in a new curriculum may affect their self-perception of preparedness without affecting their actual competence.

Keywords:
Evaluation/assessment; Clinical performance; Curriculum development/evaluation; Problem-based; Intern/house officer training; Competence