Reliability of medical record abstraction by non-physicians for orthopedic research
1 Harvard Medical School, Harvard University, Boston, MA 02115, USA
2 Orthopedic and Arthritis Center for Outcomes Research, Department of Orthopedic Surgery, Brigham and Women’s Hospital, Boston, MA 02115, USA
3 Department of Biostatistics, Boston University School of Public Health, Boston, MA 02118, USA
4 Division of Rheumatology, Immunology and Allergy, Brigham and Women’s Hospital, Boston, MA 02115, USA
5 Medicine and Orthopedic Surgery, Harvard Medical School, Brigham and Women's Hospital, BC - 4th floor, 75 Francis Street, Boston, MA 02115, USA
BMC Musculoskeletal Disorders 2013, 14:181. doi:10.1186/1471-2474-14-181. Published: 9 June 2013
Medical record review (MRR) is one of the most commonly used research methods in clinical studies because it provides rich clinical detail. However, because MRR involves subjective interpretation of information found in the medical record, it is critically important to understand the reproducibility of data obtained from MRR. Furthermore, because medical record review is both technically demanding and time intensive, it is important to establish whether research staff without clinical training can be trained to abstract medical records reliably.
We assessed the reliability of abstraction of medical record information in a sample of patients who underwent total knee replacement (TKR) at a referral center. An orthopedic surgeon instructed two research coordinators (RCs) in the abstraction of inpatient medical records and operative notes for patients undergoing primary TKR. The two RCs and the surgeon each independently reviewed 75 patients’ records and one RC reviewed the records twice. Agreement was assessed using the proportion of items on which reviewers agreed and the kappa statistic.
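The kappa statistic reported here is a chance-corrected measure of agreement: it compares the observed proportion of agreement with the agreement expected if the two reviewers labeled items independently according to their marginal frequencies. As a minimal sketch (the abstract does not specify the software used; the function name and example labels below are illustrative), Cohen's kappa for two raters can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assigned one label per item."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    # Kappa = 1 means perfect agreement; 0 means agreement no better
    # than chance.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no abstraction results for four chart items:
# observed agreement 3/4, expected agreement 1/2, so kappa = 0.5.
kappa = cohens_kappa(["y", "y", "n", "n"], ["y", "n", "n", "n"])
```

Percent agreement (the other measure reported) is simply `p_o` above; kappa is lower than percent agreement whenever chance agreement is substantial, which is why the two statistics can diverge for items with skewed response distributions.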
The kappa for agreement between the surgeon and each RC ranged from 0.59 to 1 for one RC and 0.49 to 1 for the other; the percent agreement ranged from 82% to 100% for one RC and 70% to 100% for the other. The repeated abstractions by the same RC showed high intra-rater agreement, with kappas ranging from 0.66 to 1 and percent agreement ranging from 97% to 100%. Inter-rater agreement between the two RCs was moderate with kappa ranging from 0.49 to 1 and percent agreement ranging from 76% to 100%.
The MRR method used in this study showed excellent reliability for abstraction of information that had low technical complexity and moderate to good reliability for information that had greater complexity. Overall, these findings support the use of non-surgeons to abstract surgical data from operative notes.