Open Access Research article

Fixed or mixed: a comparison of three, four and mixed-option multiple-choice tests in a Fetal Surveillance Education Program

Nathan Zoanetti1, Mark Beaves2, Patrick Griffin1 and Euan M Wallace3*

Author Affiliations

1 Assessment Research Centre, Melbourne Graduate School of Education, University of Melbourne, Parkville, Australia

2 Royal Australian and New Zealand College of Obstetricians and Gynaecologists, East Melbourne, Australia

3 Department of Obstetrics and Gynaecology, The Ritchie Centre, Monash Institute of Medical Research and Southern Clinical School, Monash University, Clayton, Australia

BMC Medical Education 2013, 13:35  doi:10.1186/1472-6920-13-35

Published: 4 March 2013

Abstract

Background

Despite the widespread use of multiple-choice assessments in medical education, current practice and published advice concerning the number of response options remain equivocal. This article describes an empirical study contrasting the quality of three 60-item multiple-choice test forms within the Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) Fetal Surveillance Education Program (FSEP). The three forms are described below.

Methods

The first form featured four response options per item. The second form featured three response options per item, created by removing the least functioning option from each item of the four-option form. The third form retained the better performing version of each item from the first two forms and therefore contained a mix of three- and four-option items.
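As an illustration only (the abstract does not specify the criterion used to identify the least functioning option), distractor analyses of this kind commonly consider how rarely each incorrect option is chosen and how it relates to total score. The short Python sketch below, with hypothetical names and data, simply flags the least chosen distractor of a single item.

import numpy as np

def least_functioning_option(responses, key, option_labels=("A", "B", "C", "D")):
    # responses: array of the option labels chosen by each examinee for one item
    # key: the correct option label for that item
    # Returns the distractor selected least often; a fuller analysis would also
    # examine each distractor's point-biserial correlation with total score.
    distractors = [o for o in option_labels if o != key]
    counts = {o: int(np.sum(responses == o)) for o in distractors}
    return min(counts, key=counts.get)

# Hypothetical usage: one item answered by five examinees, correct answer "B"
item_responses = np.array(["A", "B", "B", "D", "B"])
print(least_functioning_option(item_responses, key="B"))  # -> "C" (never chosen)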

Results

Psychometric and educational factors were taken into account in formulating an approach to test construction for the FSEP. The four-option test performed better than the three-option test overall, but some items were improved by the removal of an option. The mixed-option test demonstrated better measurement properties than either fixed-option test and has become the preferred test format in the FSEP. The criteria used were reliability, errors of measurement and fit to the item response model.
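For context only: the abstract names reliability and errors of measurement among the criteria but gives no formulas, and the item response modelling is not reproduced here. The generic Python sketch below computes internal-consistency reliability (Cronbach's alpha) and the classical standard error of measurement for a dichotomously scored 60-item form, using simulated data and hypothetical names rather than the study's own analysis.

import numpy as np

def cronbach_alpha(scored):
    # scored: persons x items matrix of 0/1 item scores
    k = scored.shape[1]
    item_vars = scored.var(axis=0, ddof=1)
    total_var = scored.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standard_error_of_measurement(scored):
    # Classical SEM: SD of total scores times sqrt(1 - reliability)
    total_sd = scored.sum(axis=1).std(ddof=1)
    return total_sd * np.sqrt(1 - cronbach_alpha(scored))

# Hypothetical usage: simulated 0/1 responses for 500 examinees on 60 items
rng = np.random.default_rng(0)
ability = rng.normal(size=(500, 1))
difficulty = rng.normal(size=(1, 60))
prob_correct = 1 / (1 + np.exp(-(ability - difficulty)))
scored = (rng.random((500, 60)) < prob_correct).astype(int)
print(round(cronbach_alpha(scored), 3), round(standard_error_of_measurement(scored), 2))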

Conclusions

The position taken is that decisions about the number of response options should be made at the item level, with plausible options added to complete each item on both psychometric and educational grounds rather than to comply with a uniform policy. The aim is to construct the best-performing version of each item, providing the most useful psychometric and educational information.