Research article

Multiple tutorial-based assessments: a generalizability study

Christina St-Onge14*, Eric Frenette2, Daniel J Côté1 and André De Champlain3

Author Affiliations

1 Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, Canada

2 Faculté des sciences de l’éducation, Université Laval, Québec, Canada

3 Research and Development, Medical Council of Canada, Ottawa, Canada

4 Chaire de recherche en pédagogie médicale de la Société des médecins de l’Université de Sherbrooke, Centre de pédagogie des sciences de la santé, Faculté de médecine et des sciences de la santé, Université de Sherbrooke, 3001, 12e Avenue Nord Sherbrooke, Sherbrooke, QC J1R 5 N4, Canada


BMC Medical Education 2014, 14:30  doi:10.1186/1472-6920-14-30

Published: 15 February 2014



Tutorial-based assessment, commonly used in problem-based learning (PBL), is thought to provide information about students that differs from what is gathered with traditional assessment strategies such as multiple-choice or short-answer questions. Although multiple observations within units of an undergraduate medical education curriculum foster more reliable scores, such an evaluation design is not always feasible in practice. This study therefore investigated the overall reliability of a tutorial-based program of assessment, namely the Tutotest-Lite.


More specifically, scores from multiple units were used to profile clinical domains for the first two years of a system-based PBL curriculum.


G-study analysis revealed an acceptable level of generalizability, with g-coefficients of 0.84 and 0.83 for Years 1 and 2, respectively. Interestingly, D-studies suggested that as few as five observations over one year would yield sufficiently reliable scores.
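The D-study projection rests on a standard result from generalizability theory: averaging scores over more observations shrinks the error variance, so the g-coefficient rises as the number of observations grows. A minimal sketch of that calculation is shown below for a simple one-facet (person x observation) design; the variance components used here are hypothetical illustrations chosen only to show the shape of the curve, not the study's actual estimates.

```python
# D-study projection for a one-facet (person x observation) design.
# The g-coefficient is var_person / (var_person + var_residual / n_obs):
# averaging over n_obs observations divides the residual variance by n_obs.

def g_coefficient(var_person, var_residual, n_obs):
    """Projected relative g-coefficient when scores are averaged over n_obs observations."""
    return var_person / (var_person + var_residual / n_obs)

# Hypothetical variance components (NOT the study's estimates):
var_person = 0.50    # universe-score (person) variance
var_residual = 0.50  # person-by-observation interaction plus error variance

for n in (1, 3, 5, 10):
    print(n, round(g_coefficient(var_person, var_residual, n), 2))
```

With these illustrative components, five observations already push the projected g-coefficient above 0.80, which mirrors the pattern the authors report: reliability adequate for decision-making with a modest number of tutorial-based observations per year.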


Overall, the results from this study support the use of the Tutotest-Lite to judge clinical domains over different PBL units.

Keywords: Assessment; G-study; Tutorial-based assessment; Programs of assessment