
Resource use data by patient report or hospital records: Do they agree?

Abstract

Background

Economic evaluations alongside clinical trials are becoming increasingly common. Cost data are often collected through the use of postal questionnaires; however, the accuracy of this method is uncertain. We compared postal questionnaires with hospital records for collecting data on physiotherapy service use.

Methods

As part of a randomised trial of orthopaedic medicine compared with orthopaedic surgery we collected physiotherapy use data on a group of patients from retrospective postal questionnaires and from hospital records.

Results

315 patients were referred for physiotherapy. Hospital data on attendance were available for 30% (n = 96) of patients, compared with 48% (n = 150) who provided questionnaire data (95% CI for the difference = 10% to 24%); 19% (n = 59) had data available from both sources. The two methods produced an intraclass correlation coefficient of 0.54 (95% CI 0.31 to 0.70). However, they gave significantly different estimates of resource use, with patient self-report recalling a mean of 1.3 extra visits (95% CI 0.4 to 2.2) compared with hospital records.

Conclusions

Using questionnaires in this study produced data on a greater number of patients than examination of hospital records did. However, the two data sources differed in the reported quantity of physiotherapy used, and this should be taken into account in any analysis.


Background

Concurrent economic evaluation alongside clinical trials is an increasingly used method for undertaking health-related cost-effectiveness studies. The collection of resource use data is a prerequisite for these evaluations. There are essentially three ways to collect resource use data: from clinicians, from patients and from routine medical records. A drawback of collecting data from clinicians, through clinical proformas or case report forms, is that clinicians cannot be blinded to patient participation in the study, increasing the chance of a biased assessment of outcome. Disadvantages of collecting data from medical records include: access to the relevant records may be difficult, particularly if treatment is spread over a number of different providers; accessing such records carries a high cost in research time; the accuracy of some data collection systems may be questionable; and some resource use, such as time off work or over-the-counter medicines, may not be recorded in medical records at all.

The alternative of asking patients about their resource use is attractive, as relevant questions can readily be included within patient self-completed questionnaires. Consequently this method is an easier and less costly form of data collection than medical records. The disadvantages of using patient-completed questionnaires, both prospective and retrospective, concern their accuracy, which is affected by recall error, item completion rates and questionnaire response rates. Hence, patients may report inaccurate levels of resource use; they may not complete the relevant resource use questions consistently, if at all; and some patients will not return their questionnaires.

There has been some previously published work examining the accuracy of patient-reported resource use data, but the results have been somewhat contradictory. Good agreement has been demonstrated in reports of specialised diagnostic procedures [1], hospital admission and specialist consultation [2]. These same studies showed over-reporting of clinic attendance [1] and blood testing [2]. Similarly, another study showed that agreement between patient report and medical records was higher for procedures that generated a test report than for those documented only by a physician note [3]. However, other studies have reported major discrepancies between medical records and patient recall [4, 5], with some finding that patient self-report tended to systematically underestimate resource use compared with medical records [6, 7]. A previous study of physiotherapy attendance showed fair agreement between patient interviews and insurance registers [8].

As part of an economic evaluation conducted alongside a randomised controlled trial in the field of orthopaedics, we identified physiotherapy a priori as an area of resource use likely to differ between the study arms. This trial, conducted with local research ethics committee approval between December 1993 and December 1994 and reported elsewhere [9], compared care from an orthopaedic medicine specialist (OM) with care from conventional orthopaedic surgeon led services (OS) for the management of non-surgical orthopaedic patients. The largest proportion of physiotherapy referrals was expected to be carried out at the orthopaedic unit in which the trial was taking place. We therefore planned to use the hospital physiotherapy department's records to provide estimates of resource use, with patient reports supplementing hospital records (for example, if a patient had received physiotherapy at a distant facility). However, once the study was underway we found that the hospital physiotherapy department's system of record keeping, based on individual physiotherapists' diaries, meant that individual patient data were difficult and time consuming to access. We also found that more patients than expected were being referred to physiotherapists in the community, at health centres or at other hospitals. In total, patients were referred to 31 different centres for their physiotherapy, which meant much of these data were impossible to access within the logistical and financial constraints of the study. We did, however, succeed in obtaining hospital data for a sample of patients, and this gave us the opportunity to compare the agreement of patient-reported and hospital-recorded data for the estimation of total resource use. The aim of this paper is therefore to compare the results of retrospective patient self-reported physiotherapy use with routinely collected hospital record data.

Methods

Patients included in the study were all aged over 18 years and had been referred to their local orthopaedic hospital with a musculoskeletal condition classified by existing hospital procedures as unlikely to require surgery. The sample was approximately 50% male, with a mean age of 45 years; 38% (n = 315) were referred for physiotherapy.

Patient self-report (PSR)

Physiotherapy resource use questions were included in a postal follow-up questionnaire mailed to patients 3 and 12 months after their initial outpatient appointment. Patients were first asked whether they had had any NHS physiotherapy for their condition over the past three months; if the answer was 'yes', they were then asked to give the date and the location of the hospital or clinic for each treatment session they attended.

Outpatient department records (ODR)

Referral for NHS physiotherapy was recorded for all patients in their outpatient department notes, together with the location of the referral. This gave definitive data on physiotherapy referrals, but no details of actual attendances.

Physiotherapy department records (PDR)

A systematic search of the physiotherapy department records of the hospital, in which the study was taking place, enabled us to retrieve a sample of referral histories. In addition, a small number of external physiotherapy clinics routinely reported back to the hospital with the number and dates of patients' treatment sessions.

Patients were selected for the reliability exercise if their physiotherapy data fulfilled the following four criteria (a brief data-selection sketch follows the list):

  • PSR and ODR data indicated that physiotherapy had been prescribed;

  • The two data sources agreed about the location of the clinic supplying treatment;

  • Numbers of attendances were available from both PSR and PDR;

  • The time frame of the PDR data matched that of the PSR data (i.e. the first and fourth 3-month periods post-consultation).
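
As an illustration only, this selection can be expressed as a filter over a merged data set. The Python sketch below uses hypothetical column names and invented records, not the study database; it simply shows one way the four criteria could be applied in sequence.

```python
import pandas as pd

# Hypothetical merged data set: one row per referred patient, with fields drawn
# from the three sources. Column names and values are illustrative only.
patients = pd.DataFrame({
    "psr_referred":  [True, True, False, True],
    "odr_referred":  [True, True, True, True],
    "psr_location":  ["Hosp A", "Clinic B", "Hosp A", "Hosp A"],
    "pdr_location":  ["Hosp A", "Clinic C", "Hosp A", "Hosp A"],
    "psr_visits":    [5, 3, None, 4],
    "pdr_visits":    [4, 2, 6, None],
    "periods_match": [True, True, True, False],  # PDR window matches PSR window
})

eligible = patients[
    patients["psr_referred"] & patients["odr_referred"]                 # criterion 1
    & (patients["psr_location"] == patients["pdr_location"])            # criterion 2
    & patients["psr_visits"].notna() & patients["pdr_visits"].notna()   # criterion 3
    & patients["periods_match"]                                         # criterion 4
]
print(f"{len(eligible)} of {len(patients)} patients enter the reliability exercise")
```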

The rate of agreement was measured by an intraclass correlation coefficient (ICC), which assesses the conformity between two quantitative measurements by reporting the proportion of total variability that is due to variation among subjects. The ICC was calculated from a two-way random effects analysis of variance (ANOVA) model [10], applied to log-transformed data to correct for non-normality. In addition, limits of agreement and a paired t-test were used [11].
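
As a rough illustration of these calculations, the following Python sketch computes a single-measure, absolute-agreement ICC (ICC(2,1) in the Shrout and Fleiss scheme [10]) from a two-way random effects ANOVA on log-transformed counts, together with a paired t-test. The data are invented, and the log(x + 1) offset is our assumption to guard against zero counts; the original analysis may have handled the transformation differently.

```python
import numpy as np
from scipy import stats

# Illustrative paired visit counts (not the study data): one value per patient
# from each measurement method.
psr = np.array([4, 6, 2, 8, 5, 3, 7, 10, 1, 6], dtype=float)  # patient self-report
pdr = np.array([3, 5, 2, 6, 4, 3, 5, 8, 2, 5], dtype=float)   # physiotherapy dept records

# Log-transform to correct for non-normality (log(x + 1) guards against zeros).
ratings = np.log1p(np.column_stack([psr, pdr]))
n, k = ratings.shape  # n subjects, k = 2 methods

# Two-way random effects ANOVA decomposition.
grand_mean = ratings.mean()
row_means = ratings.mean(axis=1)   # per-subject means
col_means = ratings.mean(axis=0)   # per-method means

ss_rows = k * np.sum((row_means - grand_mean) ** 2)
ss_cols = n * np.sum((col_means - grand_mean) ** 2)
ss_error = np.sum((ratings - grand_mean) ** 2) - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

# ICC(2,1): single-measure, absolute agreement (Shrout & Fleiss).
icc = (ms_rows - ms_error) / (
    ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
)

# Paired t-test on the untransformed counts.
t_stat, p_value = stats.ttest_rel(psr, pdr)

print(f"ICC(2,1) = {icc:.2f}, paired t = {t_stat:.2f}, p = {p_value:.3f}")
```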

Results

Figure 1 summarises the responses of the two methods of data collection. Of the 315 patients referred for physiotherapy, the main source of loss for the PDR data was the inability to trace patients referred to outlying clinics. For the PSR data the questionnaire response rate compares favourably with similar studies [12], but 26% of the patients referred for physiotherapy did not report attendance. However, 17 (21%) of these subjects did have attendance data available from PDR. PSR data on physiotherapy attendance were available for 17% more patients than PDR data (95% CI 10% to 24%).
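
The paper does not state how the confidence interval for this difference was obtained. Because both percentages are estimated on the same 315 patients, one natural choice is a Wald interval for the difference between two paired proportions; the sketch below applies that method (our assumption, not necessarily the authors') to the counts reported above and gives an interval close to the quoted 10% to 24%.

```python
import math

# Counts taken from the reported results: 315 referred patients, PSR data for 150,
# PDR data for 96, both sources for 59. The discordant cells follow by subtraction.
n = 315
psr_only = 150 - 59   # PSR available, PDR not
pdr_only = 96 - 59    # PDR available, PSR not

# Wald interval for a difference between two paired proportions (assumed method).
diff = (psr_only - pdr_only) / n
var = (psr_only + pdr_only - (psr_only - pdr_only) ** 2 / n) / n ** 2
half_width = 1.96 * math.sqrt(var)

print(f"Difference: {diff:.1%} "
      f"(95% CI {diff - half_width:.1%} to {diff + half_width:.1%})")
# -> roughly 17% (10% to 24%), in line with the figures reported above.
```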

Figure 1: Physiotherapy attendance data collection

There were 59 (19%) patients with data available from both sources; four of these cases only had PDR data for the 6-month period not covered by the follow-up questionnaire, and three differed in the reported location of the physiotherapy clinic, leaving 17% (n = 52) of patients fulfilling the four criteria for inclusion in the reliability exercise.

The ICC of 0.54 (95% CI 0.31 to 0.70) indicates reasonable agreement between PDR and PSR, as would be expected of two methods measuring the same quantity. However, in this situation the ICC is a relative measure and should be assessed alongside the other methods of assessing agreement.

Forty-eight percent (n = 25) of the study group had fewer visits recorded in PDR than reported by PSR, with a mean of 1.3 (95% CI 0.4 to 2.2) fewer visits in the physiotherapy department notes (Table). Figure 2 also illustrates this point, with the majority of the points falling below zero, indicating a lower number of visits for PDR. In addition, the limits of agreement, denoting the range in which 95% of the differences should lie, stretch from eight fewer visits for PDR to five more visits. This indicates that the level of agreement is inadequate. Although the distribution of the differences is approximately normal, there may be a tendency for the difference to increase with the mean of the two scores. A log transformation was carried out, but this had no effect on the relationship.
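
For concreteness, the following sketch shows how the mean difference, its 95% confidence interval and the Bland-Altman limits of agreement [11] are obtained from paired counts, together with a difference-against-mean plot in the spirit of Figure 2. The data are invented and the sign convention (PSR minus PDR, so positive values mean extra patient-reported visits) is ours.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Illustrative paired visit counts (not the study data).
psr = np.array([4, 6, 2, 8, 5, 3, 7, 10, 1, 6], dtype=float)
pdr = np.array([3, 5, 2, 6, 4, 3, 5, 8, 2, 5], dtype=float)

diff = psr - pdr                 # positive: patient reported more visits than PDR
mean_pair = (psr + pdr) / 2
n = len(diff)

mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)

# 95% CI for the mean difference (paired analysis).
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean_diff - t_crit * sd_diff / np.sqrt(n),
      mean_diff + t_crit * sd_diff / np.sqrt(n))

# Bland-Altman 95% limits of agreement.
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)

print(f"Mean difference (PSR - PDR): {mean_diff:.1f}, "
      f"95% CI {ci[0]:.1f} to {ci[1]:.1f}")
print(f"Limits of agreement: {loa[0]:.1f} to {loa[1]:.1f}")

# Difference-against-mean plot in the style of Figure 2.
plt.scatter(mean_pair, diff)
plt.axhline(mean_diff, linestyle="--")
plt.axhline(loa[0], linestyle=":")
plt.axhline(loa[1], linestyle=":")
plt.xlabel("Mean of the two methods (visits)")
plt.ylabel("Difference in visits (PSR - PDR)")
plt.show()
```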

Figure 2: Scatterplot of the difference in reported visits by mean reported visits for the two methods

Table: Number of physiotherapy visits reported

The table also shows that the difference in reporting observed between the two methods may not be the same for both trial groups, but the small numbers in the OS group make this difficult to say for certain.

Discussion

This study has shown that there are differences in reported use of physiotherapy between patient-completed questionnaires and hospital records. In contrast with studies reporting underestimation of resource use [6, 7], our study suggests that self-reported data give a higher estimate of resource use than hospital records. These findings agree with other studies looking at resource use for routine health service contacts [1, 2].

Assuming that most of those referred for physiotherapy receive it, the study suggests that patient self-report is more accurate than physiotherapy department notes in estimating the number of patients attending for physiotherapy treatment, although even this gives a lower estimate than we would expect given the actual number of referrals made. In addition, we have shown that estimates of the number of physiotherapy sessions attended can differ when patient self-reported data and hospital records are compared. In this instance, even for patients for whom hospital records and self-report agreed that physiotherapy treatment had taken place, we cannot be sure that the PDR data represent the more reliable measure, as there is some doubt as to whether all the physiotherapy data were recorded at all, or recorded in a form that was easily accessible to the researchers. This might explain the discrepancy between the two methods. It is also possible that recall problems led patients to report additional visits falling outside the three-month reference period.

However, even if the hospital estimates of the number of physiotherapy visits per patient were more accurate, the use of patient-reported data is not likely to bias the results of our evaluation unless there were systematic differences in accuracy between the trial groups. The table provides some evidence that this could be happening, with patients allocated to OM reporting on average one extra visit per patient compared with the OS group. It would therefore be prudent to undertake a sensitivity analysis using both estimates to ensure that the two data sources do not lead to different estimates of physiotherapy costs.

Whilst the differences between the two methods of data collection are statistically significant, they may not be of economic significance. For example, if a sensitivity analysis showed that the results of the cost-effectiveness analysis were not affected by substantial changes in physiotherapy use, as was the case in this study [9], then a difference of approximately one visit per patient would be of no consequence. However, if physiotherapy were to prove to be one of the dominant costs in the evaluation, then the addition or subtraction of one visit per patient could easily be of economic significance.
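
A minimal sketch of such a sensitivity analysis is given below; the unit cost per visit, mean visit counts and patient numbers are hypothetical placeholders, not figures from the trial.

```python
# Minimal sensitivity-analysis sketch: how a difference of roughly one visit per
# patient propagates into physiotherapy cost estimates. All figures are hypothetical.
UNIT_COST_PER_VISIT = 25.0   # hypothetical cost (GBP) of one physiotherapy session

mean_visits_psr = 5.3        # hypothetical mean visits, patient self-report
mean_visits_pdr = 4.0        # hypothetical mean visits, physiotherapy dept records
n_patients = 150             # hypothetical number of costed patients

for label, mean_visits in [("PSR", mean_visits_psr), ("PDR", mean_visits_pdr)]:
    per_patient = mean_visits * UNIT_COST_PER_VISIT
    total = per_patient * n_patients
    print(f"{label}: {per_patient:.2f} per patient, {total:.2f} in total")

# If the cost-effectiveness conclusion is unchanged when the PSR figures are swapped
# for the PDR figures, the disagreement is of no economic significance; if
# physiotherapy dominates total costs, the same one-visit gap could matter.
```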

The use of sensitivity analyses in these situations would also improve the generalisability of the findings. Where levels of nonresponse are high, sensitivity analyses may be combined with multiple imputation methods [13] to assess the results with respect to uncertainty around both resource use and its costs.
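
The sketch below illustrates the pooling step only, using hypothetical visit counts with missing values, a deliberately crude imputation model (resampling observed values) and Rubin's rules for combining the estimates; a real analysis would use a proper multiple imputation model as described in [13].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visit counts with nonresponse encoded as np.nan.
visits = np.array([4, np.nan, 2, 8, np.nan, 3, 7, np.nan, 1, 6], dtype=float)
observed = visits[~np.isnan(visits)]
n_missing = int(np.isnan(visits).sum())

m = 20                     # number of imputed data sets
estimates, variances = [], []
for _ in range(m):
    completed = visits.copy()
    # Crude imputation for illustration only: resample observed values with
    # replacement. A real analysis would use a principled imputation model [13].
    completed[np.isnan(completed)] = rng.choice(observed, size=n_missing, replace=True)
    estimates.append(completed.mean())
    variances.append(completed.var(ddof=1) / len(completed))

estimates = np.array(estimates)
variances = np.array(variances)

# Rubin's rules: pooled estimate, within- and between-imputation variance.
pooled = estimates.mean()
within = variances.mean()
between = estimates.var(ddof=1)
total_var = within + (1 + 1 / m) * between

print(f"Pooled mean visits: {pooled:.2f} (SE {np.sqrt(total_var):.2f})")
```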

Conclusions

This study has shown that there can be a disparity between hospital records and patient self-reported data with neither source necessarily being the more accurate. We would recommend that in similar situations, the decision on which data source to use be based upon a pilot study comparing self-report and hospital data against a rigorously collected 'gold standard'. We would also recommend that where data from both sources are available and there are material differences in resource use, their impact be assessed using sensitivity analysis.

References

  1. Brown-Betz J, Adams ME: Patients as reliable reporters of medical care process. Medical Care. 1992, 30: 400-411.

  2. Ungar WJ, Coyte PC, et al: Health services utilization reporting in respiratory patients. J Clin Epidemiol. 1998, 51: 1335-1342. doi:10.1016/S0895-4356(98)00117-6.

  3. Gordon NP, Hiatt RA, Lampert DI: Concordance of self-reported data and medical record audit for six cancer screening procedures. Journal of the National Cancer Institute. 1993, 85: 566-570.

  4. Fowles JB, Fowler E, Craft C, McCoy CE: Comparing claims data with medical record for Pap smear rates. Evaluation & the Health Professions. 1997, 20: 324-342.

  5. McKinnon ME, Vickers MR, Ruddock VM, Townsend J, Meade TW: Community studies of the health service implications of low back pain. Spine. 1997, 22: 2161-2166. doi:10.1097/00007632-199709150-00014.

  6. Roberts R, Bergastralh E, Schmitt, Jacobsen S: Comparison of self-reported and medical record health care utilisation measures. J Clin Epidemiol. 1996, 49: 989-995. doi:10.1016/0895-4356(96)00143-6.

  7. Jobe J, White A, Kelly C, Mingay D, Sanchez M, Loftus E: Recall strategies and memory for health-care visits. The Milbank Quarterly. 1990, 68: 171-189.

  8. Reijneveld SA: The cross-cultural validity of self-reported use of health care: a comparison of survey and registration data. J Clin Epidemiol. 2000, 53: 267-272. doi:10.1016/S0895-4356(99)00138-9.

  9. Leigh Brown AP, Kennedy ADM, Torgerson DJ, et al: The OMENS trial: opportunistic evaluation of musculo-skeletal physician care among orthopaedic outpatients unlikely to require surgery. Health Bulletin. 2001, 59: 198-210.

  10. Shrout PE, Fleiss JL: Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin. 1979, 86 (2): 420-428. doi:10.1037//0033-2909.86.2.420.

  11. Bland JM, Altman DG: Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986, i (8476): 307-310.

  12. Dillman DA: Mail and Telephone Surveys: The Total Design Method. New York: Wiley. 1978.

  13. Rubin DB: Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons. 1987.


Acknowledgements

We would like to thank Helen Lawrie and Lynda Davidson for their help in collecting the data for this study and the referees for their useful comments.

Author information

Corresponding author

Correspondence to Andrew DM Kennedy.

Additional information

Competing interests

None declared.



About this article

Cite this article

Kennedy, A.D., Leigh-Brown, A.P., Torgerson, D.J. et al. Resource use data by patient report or hospital records: Do they agree?. BMC Health Serv Res 2, 2 (2002). https://doi.org/10.1186/1472-6963-2-2
