
Graduates of different UK medical schools show substantial differences in performance on MRCP(UK) Part 1, Part 2 and PACES examinations

IC McManus*, Andrew T Elder, Andre de Champlain, Jane E Dacre, Jennifer Mollon and Liliana Chis

BMC Medicine 2008, 6:5  doi:10.1186/1741-7015-6-5


Insufficient evidence to conclude that quality of teaching across UK medical schools differs

Roy L. Soiza   (2008-02-25 17:04)  University of Aberdeen

The study by McManus et al [1] provides valuable information on the performance in postgraduate examinations of graduates of various UK medical schools. The authors argue that differences in the quality of medical training across medical schools explain a significant proportion of the variability in MRCP exam performance. However, I am concerned that the authors do not acknowledge some important limitations that significantly undermine the reliability of this conclusion.

Firstly, they do not mention the limitations of using pre-admission academic qualifications as a measure of pre-admission academic ability. Since many medical students at all medical schools will have achieved the top grade in their A-level or Scottish Highers examinations, this measure is subject to a particularly severe ‘ceiling effect’. As such, it cannot be considered a completely reliable measure of academic ability for medical students: those students with top grades who were offered places and chose to attend institutions such as Oxford or Cambridge may be of higher ability than those with top grades at any of the ‘lesser-performing’ universities. Even so, pre-admission academic qualifications explained most (62%) of the variability in performance between each school’s graduates, and given the stated limitation this is potentially an underestimate.

Secondly, the outcome measures are results in postgraduate examinations, yet the effect of postgraduate education is not measured in any meaningful way. This is important because it is well recognised that graduates from each school generally continue to work in close proximity to their school’s area early in their careers, and the quality and quantity of postgraduate education differ across the country. For example, from my own recent experience I am aware that there is a postgraduate programme of teaching aimed specifically at the MRCP Part 1 exam at one of the highly performing institutions, but no such programme exists around one of the poorer performing ones. Also, there is a plethora of commercial and non-commercial courses aimed at improving performance in these exams, mostly based in large cities in England or, to a lesser extent, central Scotland. Access to these may be more difficult for graduates working around some of the poorer performing centres such as Belfast, Dundee or Aberdeen, but this is not a reflection of undergraduate teaching at these universities.

The authors contend that their finding that ‘recency’ of graduation is correlated with exam results supports their conclusions on the impact of university teaching on performance, but surely this only suggests that the best graduates pass their exams early irrespective of the university they attended. Their suggestion that postgraduate education ‘dilutes the effect of undergraduate teaching’ implies either that postgraduate education is uniform across the country, or that it is worst around the better performing universities; both scenarios are unlikely. Furthermore, if undergraduate teaching is such an important independent predictor of future exam results, why are the results not correlated with the measures of teaching quality (the Guardian assessments)? And why have curricular changes not resulted in noticeable shifts in performance over time across the universities? The simplest explanation is that any variability in the quality and content of undergraduate teaching is not as important as the authors contend, and that powerful, uncorrected confounders in pre-admission ability and/or postgraduate education exist.

Whilst fully endorsing the authors’ desire to ensure universal quality of undergraduate medical teaching, I am concerned that their conclusion that there are differences in the quality of medical teaching across UK medical schools is, at best, unproven. The study has understandably generated wider interest, and the authors’ rather bold conclusions have been repeated (for example, in this week’s British Medical Journal [2]). There is a risk that graduates from medical schools that have been labelled as performing poorly may now be disadvantaged in their careers, whilst these same medical schools may find it even more difficult to attract good candidates in the future. Based on the published analyses, I think this would be both unfair and lamentable.

References:

1) McManus IC, Elder AT, de Champlain A, Dacre JE, Mollon J, Chis L. Graduates of different UK medical schools show substantial differences in performance on MRCP(UK) Part 1, Part 2 and PACES examinations. BMC Medicine 2008, 6:5.

2) Mayor S. Doctors from different schools vary in later postgraduate exams. BMJ 2008, 336:347.

Competing interests

I am an employee of the University of Aberdeen. I am a graduate of the University of Edinburgh.


Post graduate training effect

Alasdair Strachan   (2008-02-18 15:02)  South Yorkshire Foundation School

McManus et al raise some interesting questions regarding how well different medical schools prepare their students for success in a specialty-specific medical examination during their subsequent postgraduate training. They outline a number of significant background variables but do not seem to take into account the effect of postgraduate training. Eighteen months of postgraduate training prior to MRCP Part 1 is a significant proportion of overall clinical experience, especially for graduates of medical schools with longer preclinical periods; three years of postgraduate training prior to MRCP Part 2 may equal the period of clinical experience at medical school. Analysis of the data taking into account place of postgraduate training would enhance this paper and might then make the conclusion that differences in performance exist between medical schools more robust. Further analysis in relation to other specialties would also be enlightening.

Competing interests

None
