Delivery and use of individualised feedback in large class medical teaching

Abstract

Background

Formative feedback that encourages self-directed learning is difficult to deliver in large class medical teaching. This study describes a new method, blueprinted feedback, and explores learners’ responses in order to assess its appropriate use within medical science teaching.

Methods

Mapping summative assessment items to their relevant learning objectives creates a blueprint which, on completion of the assessment, can be used to automatically create a list of objectives ranked by the individual student’s attainment. Two surveys targeted medical students in years 1, 2 and 3. The behaviour-based survey was released online several times, receiving 215 and 22 responses from year 2, and 187, 180 and 21 responses from year 3. The attitude-based survey was interviewer-administered and released once, receiving 22 responses from each of years 2 and 3, and 20 responses from year 1.

Results

88-96% of learners viewed the blueprinted feedback report, whilst 39% used the learning objectives to guide further learning. Females were significantly more likely to revisit learning objectives than males (p = 0.012). The most common reason for not continuing learning was a ‘hurdle mentality’ of focusing learning elsewhere once a module had been assessed.

Conclusions

Blueprinted feedback contains the key characteristics required for effective feedback, so that with further education and support concerning its use it could become a highly useful tool for both the individual learner and the teacher.

Background

Feedback acts as a response to performance, correcting and reinforcing knowledge to minimise error. When used within teaching, feedback can bring learners through the stages depicted in the conscious competence theory [1], particularly from the important stage of unconscious incompetence to conscious incompetence. With further self-directed learning, this will hopefully lead to conscious competence and corrected understanding: the fundamental purpose of feedback. Without development to this stage of learning and correction, the learner struggles to progress.

The cycle of learning proposed by Kolb [2] begins with the learner experiencing (“what do I know?”), which leads on to reflecting on the task (“what do I need to know?”). The cycle continues with thinking and conceptualising (“how much and how well do I understand?”), which, when practised effectively, highlights areas that are only partially understood, prompting the learner to correct their knowledge in the acting stage (“how can I take my learning further?”). The challenge lies in how the teacher can lead students individually through this cycle.

The literature suggests a number of feedback models that guide the individual through the cycle described by Kolb [2]. Models implemented within clinical teaching include Pendleton’s rules [3], ALOBA [4], the Chicago model [5], SET-GO [6], the SCOPME model [7] and the six-step problem-solving model [8]. With frequent opportunities for teacher-facilitated formative feedback during clinical training, these are appropriate feedback models to be used according to the teacher’s preferences and abilities. Furthermore, research is currently evaluating the benefits of feedforward interventions in the clinical setting to improve learners’ performance, by having the learner focus on internal standards and on previous performance that they considered good practice [9]. However, these teacher-facilitated feedback models place a high demand on resources [10]. As a result, lecture-based medical teaching requires the development of alternative feedback models that can offer a universal feedback report to a large number of learners whilst incorporating the key characteristics of effective feedback. Research suggests that effective feedback is: timely [11, 12], specific [13], non-evaluative [6, 14], individualised [15] and constructive [14, 16]. This study explores learners’ responses to blueprinted feedback, an innovative feedback model that follows summative assessment in medical science teaching and provides a tool for offering feedback to large cohorts. This model has been used repeatedly throughout our medical curriculum and was recently commended as an example of good practice in advice supplementary to Tomorrow’s Doctors [17], but has not been explained until now.

Blueprinted feedback

Learning objectives establish what is expected of the learner and can be used to help monitor the learner’s progress in achieving it. Mapping summative assessment items to their relevant learning objectives creates a blueprint. On completion of the assessment, this blueprint can be used to automatically create a report listing the module learning objectives examined in the assessment, ranked according to the individual’s achievement of each specific objective (Figure 1). The blueprinted feedback model thus offers an automated yet personalised option for delivering feedback to large numbers of learners. Presenting information as learning objectives has two key advantages: 1) it protects the validity of assessment items for future use, and 2) it has the potential to encourage deeper learning, drawing the learner’s attention away from the extraneous contextual detail of assessment questions and redirecting it to the learning objectives themselves. This focuses future learning and revision on the learning objectives, leading to competence across the medical curriculum.

Figure 1. Example of a blueprinted feedback report received by an individual learner following a summative assessment.
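For illustration, the core of the blueprinting procedure can be expressed as a short sketch. The following Python fragment is a minimal, hypothetical rendering of the idea rather than the production implementation: the item identifiers, objective names and function names are all assumed for the example.

```python
from collections import defaultdict

# Hypothetical blueprint: each assessment item is mapped to the learning
# objective it examines (identifiers and objectives are illustrative).
blueprint = {
    "q1": "LO1: Describe the cardiac cycle",
    "q2": "LO1: Describe the cardiac cycle",
    "q3": "LO2: Explain renal autoregulation",
    "q4": "LO3: Outline haemoglobin synthesis",
}

def blueprinted_report(blueprint, marks):
    """Rank learning objectives by one student's attainment.

    marks maps each item to the score obtained on it (here 1/0).
    Returns (objective, percentage) pairs, best attained first.
    """
    scored, totals = defaultdict(float), defaultdict(int)
    for item, objective in blueprint.items():
        scored[objective] += marks.get(item, 0)
        totals[objective] += 1
    attainment = {obj: 100 * scored[obj] / totals[obj] for obj in totals}
    return sorted(attainment.items(), key=lambda kv: kv[1], reverse=True)

# One student's marks: 1 = correct, 0 = incorrect.
student_marks = {"q1": 1, "q2": 0, "q3": 1, "q4": 0}
for objective, pct in blueprinted_report(blueprint, student_marks):
    print(f"{pct:5.1f}%  {objective}")
```

Because every student sits the same blueprinted assessment, the same mapping yields a differently ordered report for each individual at no extra marking cost.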

Benefits for the learner

Historically, all that was often fed back to students after a summative assessment was an overall percentage mark. Blueprinted feedback offers a relevant report to learners at varying levels of competence. By ranking the learning objectives according to achievement, it highlights strengths to the weak learner and weaknesses to the strong learner. This form of constructive feedback affirms learners whilst stimulating self-correction.

Benefits for the teacher

First, blueprinting saves time: it is quicker to blueprint a summative assessment once than to meet individually with each student in a large cohort. Second, blueprinting adds perspective: beyond the ticks and crosses on an assessment paper, linking items to learning objectives is powerful when meeting a tutee and trying to generalise potential areas of weakness. Finally, a by-product of generating the personalised reports is a map of the objectives sampled in the current assessment, which can be used over several assessment sessions to ensure that all objectives are fully assessed.

Highlighting the achievement of learning objectives can also offer a level of feedback to the teacher. A summary blueprinted report of the average performance of the whole cohort (Figure 2) makes it possible to generalise trends in the acquisition of objectives. As each assessment item is mapped to a learning objective and to the teaching session in which it was taught, the report gives the teacher a clear picture of which content was well communicated and understood by the learners. Whilst revealing areas of the curriculum that were thoroughly understood, this also highlights gaps in teaching, or a lack of the resources needed to support certain content. Further analysis of the cohort’s performance can reveal trends among weaker or stronger learners, providing insight into the cohort’s specific needs for future teaching.

Figure 2. Example of a staff summary blueprinted feedback report for an entire cohort.
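A sketch of how such a staff summary could be derived from the same blueprint follows, reusing the hypothetical blueprinted_report function from the earlier example (again an assumption for illustration, not the actual implementation):

```python
def cohort_summary(blueprint, all_marks):
    """Average attainment per objective across a cohort.

    all_marks is a list of per-student mark dictionaries. Since all
    students sit the same blueprinted assessment, every individual
    report covers the same set of objectives.
    """
    per_student = [dict(blueprinted_report(blueprint, m)) for m in all_marks]
    objectives = per_student[0].keys()
    averages = ((obj, sum(s[obj] for s in per_student) / len(per_student))
                for obj in objectives)
    # Weakest objectives first, to flag possible gaps in teaching.
    return sorted(averages, key=lambda kv: kv[1])
```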

Aims

To describe a new computational procedure for automated delivery of individualised feedback to students following summative e-assessment, and to evaluate the utility of the model through surveys of the student experience.

Methods

The feedback structure

The report (Figure 1) is specific, non-evaluative and individualised in its structure and format. Delivery can be via automated email or an online network, and can be triggered for release immediately once the student completes the assessment, or delayed to coincide with the release of assessment results, as appropriate. With conflicting suggestions in the literature regarding the most effective time to offer feedback (i.e. immediately after the assessment, or later with the results), this model allows flexibility, leaving the timing of release to the discretion of the faculty. The final characteristic of effective feedback is to be constructive. This is achieved by the electronic tags alongside each learning objective, which link to the teaching session in which it was taught, together with additional supporting online resources available within the virtual learning environment. This guides the learner to correct or consolidate knowledge, and so complete the full cycle of learning.
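As a sketch of how each objective in the report might carry such a tag, the fragment below links a report row back to its teaching session; the VLE address, URL structure and function name are assumptions for illustration only.

```python
def render_row(objective, pct, session_id):
    # Placeholder VLE address; the real link structure within the
    # virtual learning environment is assumed, not documented here.
    link = f"https://vle.example.ac.uk/session/{session_id}"
    return f'{pct:.0f}%  {objective}  <a href="{link}">revisit session</a>'
```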

We have also explored the efficacy of a report that uses a traffic light system to indicate the achievement of each learning objective. Fulfilled objectives are depicted by green traffic lights, partially fulfilled objectives light amber, and objectives that are cause for concern light red. The e-assessment management software which facilitates this blueprinted feedback was developed in-house but is now freely available as an Open Source Project at: http://rogo-oss.nottingham.ac.uk/.

All examinees within a cohort see the same objectives in their feedback report. However, the objectives are ordered by personal performance on the questions linked to each objective, best at the top to worst at the bottom. The shape of the traffic light icons has also been varied to aid accessibility (e.g. for learners with colour blindness). The scale for determining which icon to display is shifted upwards, as most students are expected to exceed 40-50%. Initially a straight 33%/66% split between red, amber and green was considered; however, it was thought inappropriate to award a green, and potentially psychologically very positive, icon for 67%. Instead the boundaries were shifted upwards to 50% and 80%.
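The shifted boundaries can be stated compactly; this sketch assumes the lower edge of each band is inclusive, which the text does not specify:

```python
def traffic_light(attainment_pct):
    """Map per-objective attainment to an icon using the shifted
    50%/80% boundaries rather than a naive 33%/66% split."""
    if attainment_pct >= 80:
        return "green"  # objective fulfilled
    if attainment_pct >= 50:
        return "amber"  # objective partially fulfilled
    return "red"        # cause for concern
```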

We have also evaluated how well blueprinted feedback is received, understood and used. When analysing the extent to which blueprinted feedback is utilised by learners currently studying undergraduate medicine, the following hypotheses were tested:

  1. Learners highly value blueprinted feedback.

  2. Learners use blueprinted feedback to assist in self-correction of knowledge.

Data collection and analysis

Responses from medical students in years 1 and 2 of training were collected through an online, behaviour-based survey. As responders chose whether to self-complete the survey, there was a larger response the first time it was released (samples of 180-215 from a population of 250) but a smaller response on subsequent releases (sample sizes of 21-22). The second survey explored attitudes concerning the use of blueprinted feedback, and targeted three year groups through an interviewer-administered survey (samples of 20-22). As a result, the views of three year groups were recorded, showing differences in attitude at each stage of study (Table 1).

Table 1 Sample size from both surveys

Totals can only be summed for the attitude-based survey, as responders to the behaviour-based survey overlap between releases. The stage-of-study names are abbreviated from “Year x Semester y” to “Yr x S y”.

Questions were designed to avoid bias from order effects, acquiescence, central tendency and pattern answering [18, 19]. The threshold for statistical significance was set at p < 0.05. Normality was assessed using the Shapiro-Wilk test, with appropriate further statistical tests chosen according to the data’s distribution, variance and type.
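This distribution-dependent approach can be sketched as follows; the paper does not name the follow-up tests used, so the two-sample choices here (independent t-test versus Mann-Whitney U) are assumptions:

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick a two-sample test after Shapiro-Wilk normality checks,
    mirroring the distribution-dependent approach described above."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return stats.ttest_ind(a, b)   # parametric comparison
    return stats.mannwhitneyu(a, b)    # non-parametric alternative
```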

Results

The value of blueprinted feedback within medical education

Responders across the three year groups rated the value of feedback at a mean of 4.19 on a 5-point Likert scale (1 = not useful, 5 = very useful). The importance placed on receiving feedback following summative assessment was reflected by 88-96% of the cohort viewing the specific modular blueprinted feedback reports as they were released.

Utilising blueprinted feedback to assist in self-correction

39% of responders utilised the blueprinted feedback report to direct further learning and correction. There was a significant association (p = 0.012) between gender and level of use. 32% of female learners reflected upon both the red and amber light learning objectives and revisited the lecture content of both to improve their knowledge. No males used the report to this depth; 23% revisited only the lecture content of learning objectives lit red. In summary, 77% of males did not revisit any lecture content at all, compared with 45% of females (Figure 3).

Figure 3. Revisited traffic lights: pie charts showing which traffic light objectives learners in years 2 and 3 chose to revisit lecture content for. N = 44.

Overall, 61% of learners did not use the blueprinted feedback report for its intended purpose, and the attitudes of students across the three years of study show that their evaluation of the usefulness of the report declined with progression through the course (Figure 4). Possible barriers hindering use of the blueprinted report were explored, with learners stating that the main factors limiting their use of the feedback were that the module had finished, a lack of time, and laziness (Figure 5).

Figure 4. Usefulness of feedback: line graph showing how highly responders from the 2008 entry cohort rated the usefulness of blueprinted feedback across successive semesters. No data were collected in Yr1 S2. N = 187, 180, 21.

Figure 5. Reasons for not revisiting unachieved objectives: stacked bar chart showing the primary and secondary reasons responders gave for not revisiting the learning objectives on which they had performed poorly. N = 44.

The motivation of learners was also addressed in the survey. The mean assessment grade that students considered to reflect an adequate performance was 63.1%. 58% of responders then suggested they would be more motivated to improve and to utilise the feedback report if told they had performed 10% below the year’s average (calculated as 65%) than if simply told they had achieved 55%, even though both describe the same mark.

Discussion

Use of blueprinted feedback

Blueprinted, objective-based feedback is unique in its ability to offer highly individualised, specific feedback following summative assessment that includes a depth of quantitative detail without compromising assessment item validity. 88-96% of responders viewed the feedback reports, evidencing a desire for individualised feedback, whereas in a previous study only 46% of learners sought individualised feedback [20].

This study showed that only 39% of learners utilised the blueprinted feedback report as extensively as intended, suggesting there are hidden obstacles to its use. The unfamiliarity of the report causes some learners to reject the feedback instantly, highlighting the clear need for education on the unique blueprinted format to reduce cognitive load. Research shows that teaching medical students to give and receive feedback in the first year of their training is effective in instilling the self-reflective practice needed for a life-long career [21]. Feedback is a form of reflective practice, a skill that doctors must develop to benefit patients [22]. The study shows that responders view blueprinted feedback as less useful over time, which is likely to be linked to the lack of education accompanying the feedback report. Integrating teaching on feedback, and on applying all stages of the learning cycle, into the medical curriculum therefore appears necessary.

Feedback from this group of learners is also being acted upon, for example by delaying delivery of the report to coincide with the release of assessment results. Additional levels of quantitative detail reflecting the cohort’s performance on specific objectives are being integrated into the report, after considering how learners appear to be motivated by norm-referencing. The responders demonstrated that the standard they hold for themselves is influenced more by comparison with peer performance than by internal standards. This supports the use of feedback that involves an external standard and motivation, rather than feedforward, which focuses on internal standards and consequent motivation [9].

Challenges facing the efficacy of blueprinted feedback

Research suggests that the impact of feedback upon learning is complex, with meta-analysis showing that in one third of experiments feedback interventions reduced performance [23]. Our surveys highlighted a number of factors that inhibited continued learning and correction. The predominant reason learners gave, “the module has finished”, reveals a hurdle mentality in learners who prematurely close the cycle of learning before reflecting upon their feedback from summative assessment. This supports a greater emphasis on spiral curricula, together with early signposting to students of objectives that will be revisited and built upon.

As research has explored the complexity behind learner stimulation and self-directed action, conflicting theories have developed. White proposes undifferentiated grading systems that only indicate whether the learner has passed or failed, to stimulate improvement by reducing comparison and peer competition [24]. Conversely, Hewson demonstrates that specificity and facts are a key element of effective feedback [14]. Kluger proposes that stimulation to improve depends on the learner’s state of mind in relation to the activity: whether it was undertaken for pleasure or to avoid pain [9]. However, this theory relies heavily on the learner’s individual mindset and focus, and so is difficult to incorporate into large-scale feedback, as it requires similar amounts of personal interaction to other widely accepted feedback models (Pendleton’s rules [3], ALOBA [4], the Chicago model [5], SET-GO [6], the SCOPME model [7], and the six-step problem-solving model [8]).

The development of blueprinted feedback has provided a tool that can be utilised in education beyond medicine. Indeed, at the University of Nottingham blueprinted feedback has been used in Engineering since the 2010/11 session and is now being explored by other disciplines. It helps move the educational research field forward by offering a model that provides automated, universal feedback that is both individualised and specific following summative assessment. The main factor restricting the use of blueprinted feedback is the lack of integrated education in the curriculum regarding the application of feedback and the blueprinted format itself. However, the feedback report incorporates the key elements required to become highly useful to the learner when used appropriately within large class teaching.

Conclusions

Medical students seek detailed feedback, but most do not use the feedback they receive effectively. Mapping assessment items to learning objectives enables the incorporation of both qualitative and quantitative feedback without compromising summative assessment bank questions. The blueprinted report thus helps learners track their mastery of the objectives and, in turn, monitor the effectiveness of their learning, encouraging reflection on their strengths and weaknesses while also guiding them towards further online resources. It may be possible to encourage reflection by providing a formative e-assessment in which the blueprinted feedback would undergo blinded peer review, with the aim of obtaining peer recommendations on how to improve performance. This would provide an opportunity for collaborative interaction and motivate students to spend time analysing the reports prior to a summative end-of-term e-assessment, which could then test the same objectives using different questions. In addition, a summary report for the entire cohort can present generalised trends to the teacher, to improve the future teaching of objectives.

References

  1. Howell WC, Fleishman EA: Human performance and productivity. Vol 2: Information processing and decision making. 1982, Hillsdale, NJ: Erlbaum

  2. Kolb D, Fry R: Toward an applied theory of experiential learning. Theories of group processes. Edited by: Cooper CL. 1972, London: John Wiley

  3. Pendleton D, Schofield T, Tate P, Havelock P: The consultation: an approach to learning and teaching. 1984, Oxford: Oxford University Press

  4. Silverman J, Draper J, Kurtz S: The Calgary-Cambridge approach to communication skills teaching. 1. Agenda-led outcome-based analysis of the consultation. Educ Gen Pract. 1996, 7: 288-299.

  5. Brukner H, Altkorn D, Cook S, Quinn M, McNabb W: Giving effective feedback to medical students: a workshop for faculty and house staff. Med Teach. 1999, 21: 161-165. 10.1080/01421599979798.

  6. Chowdhury R, Kalu G: Learning to give feedback in medical education. Obstet Gynaecol. 2004, 6: 243-247. 10.1576/toag.6.4.243.27023.

  7. Standing Committee on Postgraduate Medical and Dental Education (SCOPME): Appraising doctors and dentists in training. 1996, London: DoH

  8. Wall D: Giving feedback effectively. Teaching made easy: a manual for health professionals. Edited by: Mohanna K, Wall D, Chambers R. 2004, Oxford: Radcliffe Medical Press

  9. Kluger A, van Dijk D: Feedback, the various tasks of the doctor, and the feedforward alternative. Med Educ. 2010, 44: 1166-1174. 10.1111/j.1365-2923.2010.03849.x.

  10. Molloy E: The feedforward mechanism: a way forward in clinical learning? Med Educ. 2010, 44: 1157-1158. 10.1111/j.1365-2923.2010.03868.x.

  11. Race P: Making learning happen. 2005, London: SAGE Publications

  12. Carter J: Instructional learner feedback: a literature review with implications for software development. Computing Teacher. 1984, 12: 53-55.

  13. Van de Ridder M, Stokking K, McGaghie W, ten Cate O: What is feedback in clinical education? Med Educ. 2008, 42: 189-197. 10.1111/j.1365-2923.2007.02973.x.

  14. Hewson M, Little M: Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998, 13: 111-116. 10.1046/j.1525-1497.1998.00027.x.

  15. Burr S, Brodier E: Integrating feedback into medical education. Br J Hosp Med. 2010, 71: 520-523.

  16. McKimm J: Giving effective feedback. Br J Hosp Med. 2009, 70: 158-161.

  17. General Medical Council (GMC): Advice supplementary to Tomorrow’s Doctors 2009: assessment in undergraduate medical education. 2011, London: GMC

  18. Brace I: Questionnaire design: how to plan, structure and write survey material for effective market research. 2008, London: Kogan Page

  19. Albaum G: The Likert scale revisited. J Mark Res Soc. 1997, 39: 311-348.

  20. Sinclair H, Cleland J: Undergraduate medical students: who seeks formative feedback? Med Educ. 2007, 41: 580-582. 10.1111/j.1365-2923.2007.02768.x.

  21. Kruidering-Hall M, O’Sullivan P, Chou C: Teaching feedback to first-year medical students: long-term skill retention and accuracy of student self-assessment. J Gen Intern Med. 2009, 24: 721-726. 10.1007/s11606-009-0983-z.

  22. Driessen E, van Tartwijk J, Dornan T: The self-critical doctor: helping students become more reflective. BMJ. 2008, 336: 827-830. 10.1136/bmj.39503.608032.AD.

  23. Kluger A, DeNisi A: The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996, 119: 254-284.

  24. White C, Fantone J: Pass-fail grading: laying the foundation for self-regulated learning. Adv Health Sci Educ Theory Pract. 2010, 15: 469-477. 10.1007/s10459-009-9211-1.

Acknowledgements

The authors are grateful to all the students and staff who provided feedback on their interaction with the blueprinting process.

Author information

Corresponding author

Correspondence to Steven A Burr.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

SAB conceived the study and drafted the manuscript. SAB and EB designed the student surveys and interpreted the findings. EB conducted the student surveys, analysed the data and helped draft the manuscript. SW conceived, produced and refined the computational method. All authors read and approved the final manuscript.

Steven A Burr and Elizabeth Brodier contributed equally to this work.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Burr, S.A., Brodier, E. & Wilkinson, S. Delivery and use of individualised feedback in large class medical teaching. BMC Med Educ 13, 63 (2013). https://doi.org/10.1186/1472-6920-13-63
