
Comparing hospital mortality – how to count does matter for patients hospitalized for acute myocardial infarction (AMI), stroke and hip fracture

Abstract

Background

Mortality is a widely used, but often criticised, quality indicator for hospitals. In many countries, mortality is calculated from in-hospital deaths, due to limited access to follow-up data on patients transferred between hospitals and on discharged patients. The objectives were to: i) summarize time, place and cause of death for first-time acute myocardial infarction (AMI), stroke and hip fracture, ii) compare case-mix adjusted 30-day mortality measures based on in-hospital deaths and in-and-out-of-hospital deaths, with and without patients transferred to other hospitals.

Methods

Norwegian hospital data within a 5-year period were merged with information from official registers. Mortality based on in-and-out-of-hospital deaths, weighted according to length of stay at each hospital for transferred patients (W30D), was compared to a) mortality based on in-and-out-of-hospital deaths excluding patients treated at two or more hospitals (S30D), and b) mortality based on in-hospital deaths (IH30D). Adjusted mortalities were estimated by logistic regression which, in addition to hospital, included age, sex and stage of disease. The hospitals were assigned outlier status according to the Z-values for hospitals in the models; low mortality: Z-values below the 5-percentile, high mortality: Z-values above the 95-percentile, medium mortality: remaining hospitals.

Results

The data included 48 048 AMI patients, 47 854 stroke patients and 40 142 hip fracture patients from 55, 59 and 58 hospitals, respectively. The overall relative frequencies of deaths within 30 days were 19.1% (AMI), 17.6% (stroke) and 7.8% (hip fracture). The cause of death diagnoses included the referral diagnosis for 73.8-89.6% of the deaths within 30 days. When comparing S30D versus W30D outlier status changed for 14.6% (AMI), 15.3% (stroke) and 36.2% (hip fracture) of the hospitals. For IH30D compared to W30D outlier status changed for 18.2% (AMI), 25.4% (stroke) and 27.6% (hip fracture) of the hospitals.

Conclusions

Mortality measures based on in-hospital deaths alone, or measures excluding admissions for transferred patients, can be misleading as indicators of hospital performance. We propose to attribute the outcome to all hospitals by fraction of time spent in each hospital for patients transferred between hospitals to reduce bias due to double counting or exclusion of hospital stays.

Background

Hospital quality indicators are used to compare hospital performance, monitor individual hospitals, and benchmark the health care services of provinces and countries [1–5]. A quality indicator based on patient outcomes has three essential elements: the medical diagnosis, the time to the measured outcome (e.g. death, readmission, surgery), and the place of the outcome (e.g. hospital, home, institution). Mortality has been widely evaluated as a quality indicator [6–13].

Large variation in hospital ranking and outlier detection has been found when mortality measures were calculated by different methods [9, 14–16]. An inherent problem with in-hospital mortality is that it largely reflects hospital discharge practices [9, 16]. Hospitals discharging patients early may seem to perform better than hospitals where patients stay longer. For patients treated at more than one hospital (transferred patients), the outcome should be attributed to all involved hospitals [13]. However, double counting of patients may introduce bias [13, 15].

A mortality-based indicator should include all-cause, in-and-out-of-hospital deaths within a standardized follow-up period, e.g. 30 days. Data on in-hospital deaths are readily available, but obtaining data that include out-of-hospital deaths and transfer information may be a challenge. Studies have found that for some medical conditions, hospital profiles were similar when comparing mortality calculated from in-hospital deaths and from in-and-out-of-hospital deaths within 30 days (counting from the start of admission, regardless of cause) [9, 17]. Others report differences depending on the time, place and cause of death included in the mortality measurement [10, 15, 16, 18–20]. However, for transferred patients, previous studies have attributed the outcome to the first or the last hospital in the chain of admissions, or used single-hospital stays only [16, 18, 19]. To our knowledge, no previous study has attributed the outcome to all involved hospitals without double counting.

First time acute myocardial infarction (AMI), stroke and hip fracture are three common, serious and resource-demanding medical conditions. They were selected by the Norwegian Directorate for Health and Social Affairs for developing mortality as a quality indicator for Norwegian hospitals [21]. All permanent residents in Norway have a personal identification number (PIN) which enables linking between hospital data and official registers. This offers a unique opportunity to compare mortality measures that differ with respect to time and place of death and to study the impact of transfers at the national level.

The objectives of the present work were to: i) summarize time, place and cause of death for patients hospitalized with AMI, stroke and hip fracture, ii) compare risk-adjusted mortality measures based on both in-hospital deaths and in-and-out-of-hospital deaths, with and without patients transferred to other hospitals.

Methods

Data sources

We collected data from all 66 Norwegian hospitals that had acute admissions of AMI, stroke and hip fracture during 1997–2001. The data sources were: the Patient Administrative System (PAS) of each hospital which provided type of admission (acute or elective), primary and secondary diagnoses, time and date of admission, and time and date of discharge; the National Population Register which provided age, gender, and date of death; the Norwegian Causes of Death Register which provided date and cause of death. An in-house developed data extraction system semi-automatically collected the PAS data in an encrypted format [21]. Statistics Norway prepared an encrypted PIN for linking the data sources.

The study protocol for the development and evaluation of 30-day mortality as a quality indicator for Norwegian hospitals was submitted to the Regional Ethical Committee. Because the project was a quality study using existing administrative data, ethical approval was not deemed necessary, and the matter was regarded by the Committee as outside its mandate. The use of data was approved by the Norwegian Data Inspectorate and the Ministry of Health.

Inclusion and exclusion criteria

PAS records for AMI, stroke and hip fracture at each hospital were identified by the International Classification of Diseases (ICD): ICD-9 from 1997 to 1999 and ICD-10 thereafter [22]. The following admissions were included: first-time AMI (ICD-9: 410; ICD-10: I21.0-I21.3), identified from primary or secondary diagnoses; stroke (ICD-9: 431, 434, 436; ICD-10: I61, I63, I64), identified from primary diagnoses only; hip fracture (ICD-9: 820 with all subgroups; ICD-10: S72.0-S72.2), identified from primary or secondary diagnoses. Only the first admission per calendar year per patient was selected. We included hospitals with a minimum of 20 admissions each year during the 5-year period.

Patients were excluded if <18 years for AMI and stroke and <65 years for hip fracture, if the admission was coded as dead on arrival, a non-acute case, readmission or admission for rehabilitation (when identified), and non-first-time AMI for AMI patients. Since ICD-9 code 410 covers both first-time and subsequent heart attacks, a search for a previous admission to any Norwegian hospital with code 410 was made back to 1994 to ensure first-time AMI.
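
To make the selection rules concrete, here is a minimal sketch of how the ICD code sets could be encoded and applied. The record fields (icd_version, primary_dx, secondary_dx) are hypothetical and do not reflect the actual PAS schema, and the prefix matching is a simplification.

```python
# Sketch of the inclusion rules; record fields are hypothetical, not the PAS schema.

AMI_CODES = {"ICD9": {"410"}, "ICD10": {"I21.0", "I21.1", "I21.2", "I21.3"}}
STROKE_CODES = {"ICD9": {"431", "434", "436"}, "ICD10": {"I61", "I63", "I64"}}
HIP_CODES = {"ICD9": {"820"}, "ICD10": {"S72.0", "S72.1", "S72.2"}}

def matches(code, code_set):
    """True if the recorded code equals or refines a code in the set
    (e.g. I63.4 matches I63; 820 'with all subgroups' matches 820.0)."""
    return any(code == c or code.startswith(c) for c in code_set)

def is_stroke(record):
    """Stroke was identified from the primary diagnosis only."""
    return matches(record["primary_dx"], STROKE_CODES[record["icd_version"]])

def is_ami(record):
    """AMI was identified from primary or secondary diagnoses."""
    codes = AMI_CODES[record["icd_version"]]
    return matches(record["primary_dx"], codes) or any(
        matches(dx, codes) for dx in record.get("secondary_dx", []))
```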

Study sample

Five hospitals were university hospitals, 16 were large, and 45 were small. A total of 179 293 PAS records of single admissions were identified. We excluded 4 766 (2.7%) records due to missing data, and retained 174 527 records from 144 190 patients. For patients with two or more records, we established a chain of hospital admissions if the time from discharge to readmission or admission to another hospital was ≤24 hours (transferred patients). Applying the inclusion and exclusion criteria resulted in a total of 48 030 AMI patients from 55 hospitals, 47 854 stroke patients from 59 hospitals and 40 142 hip fracture patients from 58 hospitals.
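
The ≤24-hour rule for linking stays into a chain of admissions can be expressed as a simple grouping pass. This is an illustrative sketch only; the record layout (patient_id, admit, discharge as datetimes) is an assumption.

```python
from datetime import datetime, timedelta
from itertools import groupby
from operator import itemgetter

TRANSFER_GAP = timedelta(hours=24)

def build_chains(records):
    """Group each patient's admissions into chains: consecutive stays with a
    discharge-to-admission gap of <= 24 hours form one episode of care
    (a transferred patient when the chain spans more than one hospital)."""
    chains = []
    records = sorted(records, key=itemgetter("patient_id", "admit"))
    for _, stays in groupby(records, key=itemgetter("patient_id")):
        stays = list(stays)
        chain = [stays[0]]
        for stay in stays[1:]:
            if stay["admit"] - chain[-1]["discharge"] <= TRANSFER_GAP:
                chain.append(stay)  # same episode: transfer or readmission
            else:
                chains.append(chain)
                chain = [stay]
        chains.append(chain)
    return chains

# Two stays three hours apart form a single chain (a transferred patient).
recs = [
    {"patient_id": 1, "admit": datetime(2000, 1, 1, 8), "discharge": datetime(2000, 1, 10, 12)},
    {"patient_id": 1, "admit": datetime(2000, 1, 10, 15), "discharge": datetime(2000, 1, 25, 9)},
]
print(len(build_chains(recs)))  # 1
```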

Mortality measures

Three mortality measures were calculated by counting the number of all-cause deaths as follows:

  • Death within 30 days after first day of admission, occurring in-and-out-of-hospital, including transferred patients by weighting the outcome for each hospital by the fraction of time (within the 30-day period) spent in each hospital (W30D).

  • Death within 30 days after first day of admission, occurring in-and-out-of-hospital, for patients admitted to one single hospital only (S30D).

  • Death within 30 days after first day of admission, occurring in-hospital only (IH30D). For transferred patients, time to death was counted from the first day of each admission, i.e. previous hospitals in the chain of admissions counted the patient as a survivor.

The various ways of counting are summarized in Table 1. Consider a patient who was transferred from hospital 1 on day 10 and discharged from hospital 2 on day 25, i.e. after 15 days in hospital 2. For W30D, the outcome of alive is assigned with a weight of 10/25 for hospital 1 and 15/25 for hospital 2. For S30D, this patient is not included. For IH30D, both hospitals are attributed the outcome of alive. What if the patient instead stayed 21 days in hospital 2 and then died? For W30D, the outcome of alive is assigned to each of the hospitals, as the patient died 31 days after the start of the first admission; hospital 1 is weighted by 10/30 and hospital 2 by 20/30 (the days each contributed within the 30-day window). This patient is still not included for S30D. For IH30D, the outcome of alive is assigned to hospital 1, whereas hospital 2 is assigned the outcome of death, as the patient died 21 days after admission to this hospital.
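
The weighting arithmetic in the example above is straightforward to implement. The following sketch reproduces it; the function interface is our own illustration, not code from the study.

```python
def w30d(los_per_hospital, death_day=None, window=30):
    """Attribute the 30-day outcome across a chain of admissions, weighting
    each hospital by its fraction of the hospital days that fall inside the
    30-day window. los_per_hospital lists lengths of stay in admission order;
    death_day counts from the first day of the first admission (None = alive).
    Returns (died_within_window, per-hospital weights summing to 1)."""
    died = death_day is not None and death_day <= window
    days, elapsed = [], 0
    for los in los_per_hospital:
        days.append(max(min(elapsed + los, window) - min(elapsed, window), 0))
        elapsed += los
    total = sum(days)
    return died, [d / total for d in days]

# First example: 10 days in hospital 1, 15 days in hospital 2, alive.
print(w30d([10, 15]))                # (False, [0.4, 0.6]), i.e. 10/25 and 15/25
# Second example: 21 days in hospital 2, death on day 31 (outside the window).
print(w30d([10, 21], death_day=31))  # (False, [1/3, 2/3]), i.e. 10/30 and 20/30
```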

Table 1 How the three different 30-day mortality measures (W30D, S30D and IH30D) account for deaths when place and time of death varies

Statistical analysis

Means, counts and percentages were used to summarize the data. Numbers of deaths were counted for the time intervals ≤30, 31–90 and 91–365 days after the start of the first admission. The mean length of stay was calculated for each medical condition and for each hospital. Age was categorized as <50, 50–75 and >75 years for AMI and stroke patients, and 65–75 and >75 years for hip fracture patients. Seriousness of the medical condition was categorized according to the Clinical Criteria Disease Staging (CCDS) system [23] and pooled; for AMI: stages 3.1–3.3, stages 3.4–3.6 and stages 3.7–3.9; for hip fracture: stages 1.1–1.2 and stages 2.3–3.3 [21]. For stroke, seriousness was categorized as either infarction or haemorrhage. Place of death was recorded as death during the first admission, death in a subsequent hospital, or out-of-hospital death. We recorded when the underlying or any contributing cause of death matched the referral ICD-9 and/or ICD-10 codes.

Unadjusted (crude) mortalities were calculated as the proportion of deaths among all admissions or admission chains according to the definitions of W30D, S30D and IH30D. The adjusted mortalities were estimated by logistic regression models which, in addition to hospital, included the case-mix variables age, sex, and stage of disease. Age was continuous and modelled by B-splines [24]. The hospital regression coefficients were estimated as deviations from the mean of all hospitals [25]. A hospital with higher mortality than the average has a positive coefficient and a hospital with lower mortality than the average has a negative coefficient.
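
The study fitted its models in SAS and R; a rough Python analogue of one such model, using patsy's bs() spline basis for age and sum-to-zero (deviation) coding for the hospital effects, might look as follows. The column names and the spline degrees of freedom are assumptions for illustration.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis frame: one row per admission (or per stay in a chain),
# with columns died (0/1), age, sex, stage and hospital.
df = pd.read_csv("admissions.csv")

# B-spline for continuous age; Sum contrasts estimate each hospital's effect
# as a deviation from the mean of all hospitals, as described in the text.
fit = smf.glm(
    "died ~ bs(age, df=4) + C(sex) + C(stage) + C(hospital, Sum)",
    data=df,
    family=sm.families.Binomial(),
).fit()

# Positive coefficients: mortality above the all-hospital mean; negative: below.
print(fit.params.filter(like="hospital"))
```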

The hospitals were ranked according to mortality by each of the unadjusted mortality measures and by the coefficients from the logistic models. We compared the ranks of S30D and IH30D to those of W30D by the Spearman rank correlation coefficient and by the numbers of hospitals shifting rank. Shifts were categorized as none, minor (1–5 shifts), moderate (6–10 shifts), and major (>10 shifts). Correlations between W30D, S30D, IH30D and length of stay were also estimated. The absolute differences in rank between S30D and W30D and between IH30D and W30D were explored by analysis of variance (ANOVA) for the three hospital categories (university, large, small).
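
As a sketch of the rank comparison, the Spearman correlation and the shift categories could be computed as below; the inputs are the hospitals' ranks under two measures, in the same hospital order.

```python
import numpy as np
from scipy.stats import spearmanr

def compare_ranks(ranks_alt, ranks_w30d):
    """Spearman correlation between two rankings, plus each hospital's shift
    category: none, minor (1-5), moderate (6-10) or major (>10) rank shifts."""
    rho, _ = spearmanr(ranks_alt, ranks_w30d)
    shift = np.abs(np.asarray(ranks_alt) - np.asarray(ranks_w30d))
    category = np.select(
        [shift == 0, shift <= 5, shift <= 10],
        ["none", "minor", "moderate"],
        default="major",
    )
    return rho, category

rho, cats = compare_ranks([1, 2, 3, 4, 5], [2, 1, 3, 5, 4])
print(rho, cats)  # 0.8 ['minor' 'minor' 'none' 'minor' 'minor']
```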

The hospitals were categorized as having high, medium or low mortality: Z-values below the 5-percentile (of the normal distribution) identified outlier hospitals with low mortality, Z-values above the 95-percentile identified outlier hospitals with high mortality, and the remaining hospitals were classified as medium mortality. The association between change/no change in outlier status, between S30D and W30D and between IH30D and W30D, was explored by Fisher’s exact test for the three hospital categories.
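
The outlier classification amounts to comparing each hospital's Z-value to the 5th and 95th percentiles of the standard normal distribution, roughly ±1.645; a minimal sketch:

```python
from scipy.stats import norm

Z_LOW, Z_HIGH = norm.ppf(0.05), norm.ppf(0.95)  # about -1.645 and +1.645

def outlier_status(z):
    """Classify a hospital by its Z-value from the logistic model."""
    if z < Z_LOW:
        return "low mortality"
    if z > Z_HIGH:
        return "high mortality"
    return "medium mortality"

print(outlier_status(-2.1), outlier_status(0.3), outlier_status(1.9))
# low mortality medium mortality high mortality
```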

The C-statistic (area under the ROC curve) was calculated as a measure of the models’ ability to predict mortality. In general, C-statistic values above 0.7 are considered acceptable [25].
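
For completeness, a toy illustration of the C-statistic: it equals the area under the ROC curve of the model's predicted probabilities against the observed outcomes, computed here with scikit-learn on made-up numbers.

```python
from sklearn.metrics import roc_auc_score

# Observed 30-day deaths and (made-up) predicted probabilities.
died = [0, 0, 1, 1, 0, 1]
prob = [0.1, 0.5, 0.6, 0.8, 0.2, 0.4]
print(roc_auc_score(died, prob))  # 0.888..., an acceptable C-statistic
```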

The analyses were conducted using SAS Software, version 9.2 (SAS Institute, Inc, Cary, NC) and R, version 2.11.0 (free software available at http://www.r-project.org/).

Results

Patient characteristics

Disease and patient characteristics are summarized in Table 2. The majority of patients were admitted to one hospital only, while 4.8%–6.6% were transferred between hospitals. AMI constituted the largest patient group. These patients had the shortest overall mean length of stay (8.6 days), the smallest proportion of females (38.0%) and the youngest patients. The stroke patients had the longest mean length of stay (14 days); half of the patients were female and 56.0% were >75 years. The hip fracture patients had the largest proportion of females (74.2%) and were the oldest (79.9% >75 years).

Table 2 Number of hospitals, patient characteristics, time, place and number of deaths for each of the medical conditions

Time and place of death

After one year, 70.0–74.3% of the patients were alive (Table 2). The proportions of deaths within 30 days were 19.1% for AMI, 17.6% for stroke and 7.8% for hip fracture patients. Among the patients who died within 30 days, out-of-hospital deaths accounted for 11.1% (AMI), 16.5% (stroke) and 51.0% (hip fracture). Among those who died within one year, the proportion of in-hospital deaths was highest for the AMI patients (60.5%) and lowest for the hip fracture patients (15.9%).

Cause of death

The proportion of deaths with similar referral and cause of death diagnoses was high within 30 days after admission for all three medical conditions (73.8-89.6%, Table 2). Within one year, this proportion was still high for AMI (58.1%) and stroke (73.5%), but considerably lower for the hip fracture patients (37.9%).

Transferred patients

The reason for transferring patients between hospitals could not be deduced from the data. Few patients were transferred between three or more hospitals (AMI: 59 patients, stroke: 89 patients, hip fracture: 49 patients). For AMI patients transferred between two hospitals, 29.6% and 25.5% of the transfers were from a small or a large hospital, respectively, to a university hospital (Table 3). The most frequent transfer was from a large to a small hospital for stroke (39.3%) and hip fracture patients (58.2%) (Table 3).

Table 3 Number of patients transferred from initial hospital to subsequent hospital, length of stay (days) at initial hospital (LOS1) and length of stay (days) at subsequent hospital (LOS2)

The mean length of stay at the initial hospital (LOS1) was shorter than at the subsequent hospital (LOS2) for all three medical conditions, irrespective of hospital category, with the exception of AMI patients transferred from large to university hospitals (mean LOS1=5.1 days versus LOS2=3.2 days) (Table 3). The mean length of stay at the subsequent hospital was considerably longer for all transferred stroke and hip fracture patients compared to the AMI patients (Table 3).

Mortality measures

The unadjusted overall mortalities and range for individual hospitals are given in Table 4. The variation between hospitals was large within each mortality measure.

Table 4 Overall mortality (%) according to unadjusted measurement W30D, S30D and IH30D, ranges for individual hospitals

The adjusted mortality measures were highly correlated for AMI (0.82 ≤ r ≤ 0.94, Table 5) and stroke (0.78 ≤ r ≤ 0.91, Table 5). The correlations between the mortality measures and length of stay were strongest for hip fracture: W30D (r = −0.54) and S30D (r = −0.35).

Table 5 Spearman’s correlations between the adjusted 30-day mortality measures W30D, S30D and IH30D and mean length of stay (LOS)

In Figure 1, back-to-back barplots display the shifts in hospital rank and their direction, per shift category, when comparing S30D and IH30D to W30D, unadjusted (lower two rows) and case-mix adjusted (upper two rows), per medical condition. The ranking was highly influenced by the method of counting the number of deaths. For the comparisons of adjusted mortalities, only 5–9% of the hospitals kept their rank unaltered. Most shifts were minor (77.0–86%) when comparing S30D versus W30D (upper row 1, Figure 1). For IH30D versus W30D, 14% of the AMI, 17% of the stroke and 42% of the hip fracture hospitals had major (>10) shifts in rank (row 2 from top, Figure 1). Only minor shifts in rank were seen for adjusted versus unadjusted measurements.

Figure 1

Number of hospitals shifting rank, and direction of shift, when comparing the ranks obtained by mortality measures S30D and IH30D versus W30D per medical condition. Shifts are categorized as none, minor (1–5 shifts), moderate (6–10 shifts), and major (>10 shifts). The top bar in every plot shows the number of hospitals with no shift in rank. The empty bars to the right of the vertical axis show the number of hospitals shifting to a better rank (lower mortality) compared to W30D. The filled bars to the left of the vertical axis show the number of hospitals shifting to a worse rank (higher mortality) compared to W30D.

For AMI, the ANOVA indicated an association between hospital category and the mean absolute rank shift between S30D and W30D (p=0.09). No tendencies were observed for the other medical conditions or for IH30D versus W30D (0.26 ≤ p ≤ 0.94).

More hospitals changed outlier status between IH30D and W30D for AMI (18.2%) and stroke (25.4%) than between S30D and W30D (AMI: 14.0%; stroke: 17.0%) (Table 6). The largest change occurred for one stroke hospital, which had low mortality according to W30D and high mortality according to S30D. The remaining shifts were from high or low to medium, or vice versa. For hip fracture, no high or low mortality hospital was identified by S30D, whereas nine out of 14 hospitals shifted from high mortality (by W30D) to medium mortality by IH30D. Although non-significant, there was a tendency towards an association between change in outlier status for S30D and hospital category for the AMI hospitals (Fisher’s exact test: p=0.06; 0.22 ≤ p ≤ 0.80 for all other comparisons and medical conditions).

Table 6 Number of hospitals per outlier category for the 30-day adjusted mortality measures S30D and IH30D versus W30D

The C-statistics were acceptable for the various mortality measure models (ranges 0.726–0.729, 0.700–0.713 and 0.678–0.694 for AMI, stroke and hip fracture, respectively).

Discussion

This study used data that included time, place and cause of death for patients admitted for AMI, stroke and hip fracture to all Norwegian hospitals during a 5-year period. We compared case-mix adjusted hospital mortality measures based on in-and-out-of-hospital deaths for patients admitted to one hospital only (S30D), and on in-hospital deaths (IH30D), to a measure based on in-and-out-of-hospital deaths accounting for transferred patients (W30D). Major shifts in hospital ranking and outlier detection occurred.

Time and place of death

Independently of place of death, the proportion of deaths within the standardized follow-up period of 30 days was considerably lower for the hip fracture patients than for the AMI and stroke patients, in accordance with previously reported studies [12, 19, 26, 27]. For diseases with a high proportion of deaths within 30 days, such as AMI and stroke, only minor changes might be expected in hospital ranking and outlier status when comparing in-hospital deaths (IH30D) to the measures accounting for in-and-out-of-hospital deaths (W30D and S30D) [26]. However, as many as 14–17% of our hospitals had a major shift in rank for IH30D compared to W30D, and the change in outlier status was much higher than we expected for this comparison (AMI: 18.2%; stroke: 25.4%). This might be due to a fairly high proportion of out-of-hospital deaths within 30 days for the two patient groups (AMI: 11.1%; stroke: 16.5%). For hip fracture, the shifts in rank were much larger (major shifts for 42% of the hospitals) and the change in outlier status was also high (27.6%). This might be expected considering the lower short-term mortality for these patients and the very large proportion of out-of-hospital deaths (51.0%).

Follow-up care is important for patient outcome [11, 28, 29]. Variation in quality of follow-up care may explain some of the difference between in-hospital mortality and in-and-out-of-hospital mortality within 30 days. For hip fracture, the negative correlation between length of stay and W30D and S30D indicates a tendency towards better survival with longer hospital stay. This tendency was weaker for stroke and not present for AMI.

Cause of death

For deaths within 30 days, the referral diagnosis was given as the underlying or a contributing cause for more than 73% of the patients. For deaths during 91–365 days, the proportions were lower, especially for hip fracture. It is well known that identifying the cause of death may be difficult. Accordingly, counting only deaths attributed to the patient’s condition or to treatment procedures may conceal low-quality care that results in death from other immediate causes [18]. We therefore recommend inclusion of all-cause deaths.

Transferred patients

For many patients the episode of care includes more than one hospital. Transferral practices can reflect characteristics of the hospitals, for instance small hospitals sending seriously ill patients to more specialized hospitals for advanced treatment. In addition, some conditions necessitate a rehabilitation period that involves sending patients to another hospital. Our data show high proportions (>50%) of AMI patients sent from small and large hospitals to university hospitals. The likely reason is that advanced treatments (e.g. percutaneous coronary intervention (PCI) or coronary-artery bypass grafting (CABG)) were performed at the university hospitals and at a few of the large hospitals, thus leading to transfer from small hospitals. For stroke and hip fracture, the most frequent transfer was from a large to a small hospital. This may be due to patients being admitted to a large hospital for the initial treatment and subsequently transferred to a small hospital for follow-up and rehabilitation. The mean length of stay at the second hospital was considerably longer for stroke and hip fracture patients than for the AMI patients. This may indicate the need for a longer follow-up period for stroke and hip fracture patients. Transferred patients may also present with more serious conditions, necessitating a longer period of medical treatment.

In Norway, much effort has been put into centralizing specialized patient treatment, and the transfer rate has therefore increased over the past few years. Including or excluding in-transferred patients has previously been shown to be important for hospitals treating patients with AMI [15, 20, 30]. This may be explained by the high transfer rate in those studies (15%). Our data had low transfer rates (<6.6%). We would thus expect larger differences between S30D and W30D when applying newer data to explore the association between mortality and transfers and their impact on hospital performance measurement.

We are not aware of research that provides a strong theoretical and empirical basis for attributing the outcome for a single patient to several contributing health care providers. If one hospital cares for the patient in a more critical and life-threatening stage, it might be tempting to assign the outcome to this hospital only. However, from the perspective of quality surveillance, all hospital stays are important. Thus, there should be some sharing of the outcome. The weighting approach (W30D) avoids double counting and bias due to omitted hospital admissions. However, there may be various ways of weighting. Consider a patient who receives one day of extensive critical care at a university hospital and is subsequently transferred to a small hospital for nine days of follow-up care. Our approach weights the outcome by 0.9 for the small hospital and 0.1 for the university hospital. Conceivably, the weights could have been exchanged, or the hospital providing the most critical care could always be weighted more (0.5 or more?) and the remaining weight distributed among the other hospitals. This would require a detailed breakdown of the care process into diagnostic procedures and interventions, as well as considerations of the organization of care. A quantitative extension of the qualitative research of e.g. Bosk et al. would be welcome [31]. Our approach to bias reduction has the virtues of simplicity and transparency. In the absence of any theoretical or empirical guidance, we regard our weighting scheme as the least unsatisfactory of the readily available alternatives.

Small hospitals are thought to have larger variation, and thus to change status compared to larger hospitals, when the number of deaths is counted in various ways [15, 32]. The influence of hospital size on the difference between mortality measures was minor in our data. We found an indication of a difference between the hospital categories when comparing S30D and W30D for the AMI hospitals. This may be due to one university hospital with no local hospital function receiving a large proportion of in-transferred patients from a large number of small hospitals. For hip fracture, no outlier hospitals were found by S30D, and only 5 out of the 14 high mortality hospitals were detected by IH30D. These results suggest that important variation between hospitals is not identified by mortality measures that include only patients treated at one hospital.

Strengths and limitations

The unique PIN enabled the merging of data from different hospitals and the official registers. Thus, the entire chain of admissions for a patient was accounted for, as well as time, place and cause of death. Only 0.85% of the records were excluded because of an invalid PIN, mainly belonging to patients who are non-permanent residents and thus are assigned a temporary PIN upon hospital admission. Our data covered all Norwegian hospitals and all admitted patients for the three medical conditions.

The importance of coding practice, and its consequences for hospital ranks and outlier detection, has been reported [13]. Variation in diagnostic coding practice may explain differences in mortality between hospitals. Another concern has been that the patient case-mix may be incorrectly represented. Insufficient or absent adjustment for case-mix, or even different ways of treating the case-mix in the calculation of mortality, may bias hospital ranking and outlier detection [11, 13, 32]. We included three case-mix variables that are important for the prediction of mortality [11, 32]. The similar profiles for shift in rank for adjusted and unadjusted calculation of W30D, S30D and IH30D indicate little impact of case-mix on the comparison of measures. Extending our calculations to include more case-mix variables, e.g. more medical and socio-economic information, is a subject for further research.

Presenting hospital performance by use of ranking lists has been criticized [5, 8]. We found the shift in rank useful for the comparisons of the mortality measures. The change in outlier status confirmed the large variation in hospital performance when using different mortality measures. This demonstrates that how deaths are counted matters for mortality measures.

Conclusions

Mortality measures based on in-hospital deaths alone, or measures excluding admissions for transferred patients, can be misleading as indicators of hospital performance. We recommend the use of case-mix adjusted mortality based on in-and-out-of-hospital deaths within 30 days. We propose to attribute the outcome to all hospitals by the fraction of time spent in each hospital for patients transferred between hospitals, to reduce bias due to double counting or exclusion of hospital stays.

References

  1. De Vos M, Graafmans W, Kooistra M, Meijboom B, Van Der Voort P, Westert G: Using quality indicators to improve hospital care: a review of the literature. International Journal for Quality in Health Care. 2009, 21: 119-129. 10.1093/intqhc/mzn059.

  2. Mattke S, Epstein AM, Leatherman S: The OECD Health Care Quality Indicators Project: History and background. International Journal for Quality in Health Care. 2006, 18: 1-4.

  3. Agency for Healthcare Research and Quality: Guide to Inpatient Quality Indicators: Quality of Care in Hospitals – Volume, Mortality, and Utilization. 2002, Version 3.1(2007) [http://www.qualityindicators.ahrq.gov/Downloads/Modules/IQI/V31/iqi_guide_v31.pdf].

  4. Normand SLT, Shahian DM: Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007, 22: 206-226. 10.1214/088342307000000096.

  5. Goldstein H, Spiegelhalter DJ: League tables and their limitations: Statistical issues in comparisons of institutional performance. Journal of the Royal Statistical Society Series A-Statistics in Society. 1996, 159: 385-409. 10.2307/2983325.

  6. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SLT: Variability in the Measurement of Hospital-wide Mortality Rates. N Engl J Med. 2010, 363: 2530-2539. 10.1056/NEJMsa1006396.

  7. Jarman B, Gault S, Alves B, Hider A, Dolan S, Cook A, et al: Explaining differences in English hospital death rates using routinely collected data. British Medical Journal. 1999, 318: 1515-1520. 10.1136/bmj.318.7197.1515.

  8. Lilford R, Pronovost P: Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. British Medical Journal. 2010, 340: 955-957. 10.1136/bmj.c955.

  9. Borzecki AM, Christiansen CL, Chew P, Loveland S, Rosen AK: Comparison of In-Hospital Versus 30-Day Mortality Assessments for Selected Medical Conditions. Medical Care. 2010, 48: 1117-1121. 10.1097/MLR.0b013e3181ef9d53.

  10. Rosenthal GE, Shah A, Way LE, Harper DL: Variations in standardized hospital mortality rates for six common medical diagnoses - Implications for profiling hospital quality. Medical Care. 1998, 36: 955-964. 10.1097/00005650-199807000-00003.

  11. Thomas JW, Hofer TP: Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Medical Care Research and Review. 1998, 55: 371-404. 10.1177/107755879805500401.

  12. Slobbe LCJ, Arah OA, de Bruin A, Westert GP: Mortality in Dutch hospitals: Trends in time, place and cause of death after admission for myocardial infarction and stroke. An observational study. BMC Health Services Research. 2008, 8: 52-10.1186/1472-6963-8-52.

  13. Jollis JG, Romano PS: Sounding board - Pennsylvania’s Focus on Heart Attack - Grading the scorecard. N Engl J Med. 1998, 338: 983-987. 10.1056/NEJM199804023381410.

  14. Iezzoni LI, Ash AS, Shwartz M, Landon BE, Mackiernan YD: Predicting in-hospital deaths from coronary artery bypass graft surgery - Do different severity measures give different predictions?. Medical Care. 1998, 36: 28-39. 10.1097/00005650-199801000-00005.

  15. Kosseim M, Mayo NE, Scott S, Hanley JA, Brophy J, Gagnon B, et al: Ranking hospitals according to acute myocardial infarction mortality - Should transfers be included?. Medical Care. 2006, 44: 664-670. 10.1097/01.mlr.0000215848.87202.c7.

  16. Drye EE, Normand SLT, Wang Y, Ross JS, Schreiner GC, Han L, Rapp M, Krumholz HM: Comparison of Hospital Risk-Standardized Mortality Rates Calculated by Using In-Hospital and 30-Day Models: An Observational Study With Implications for Hospital Profiling. Ann Intern Med. 2012, 156: 19-26.

  17. Rosenthal GE, Baker DW, Norris DG, Way LE, Harper DL, Snow RJ: Relationships between in-hospital and 30-day standardized hospital mortality: Implications for profiling hospitals. Health Serv Res. 2000, 34: 1449-1468.

  18. Johnson ML, Gordon HS, Petersen NJ, Wray NP, Shroyer AL, Grover FL, et al: Effect of definition of mortality on hospital profiles. Medical Care. 2002, 40: 7-16. 10.1097/00005650-200201000-00003.

  19. Goldacre MJ, Roberts SE, Yeates D: Mortality after admission to hospital with fractured neck of femur: database study. British Medical Journal. 2002, 325: 868-869. 10.1136/bmj.325.7369.868.

  20. Westfall JM, Kiefe CI, Weissman NW, Goudie A, Centor RM, Williams OD, et al: Does interhospital transfer improve outcome of acute myocardial infarction? A propensity score analysis from the Cardiovascular Cooperative Project. BMC Cardiovasc Disord. 2008, 8: 22-10.1186/1471-2261-8-22.

  21. The Norwegian Knowledge Centre for the Health Services: Methodological development and evaluation of 30-day mortality as a quality indicator for Norwegian hospitals. 2005, 1-198. [http://www.kunnskapssenteret.no/Publikasjoner/Methodological+development+and+evaluation+of+30-day+mortality+as+quality+indicator+for+Norwegian+hospitals.1246.cms]

  22. World Health Organization: International Classification of Diseases (ICD). http://www.who.int/classifications/icd/en/.

  23. Gonnella JS, Louis DZ, Mccord JJ: Staging Concept - Approach to Assessment of Outcome of Ambulatory Care. Medical Care. 1976, 14: 13-21. 10.1097/00005650-197601000-00002.

  24. de Boor C: A Practical Guide to Splines. 2001, New York: Springer

  25. Hosmer DW, Lemeshow S: Interpretation of the Fitted Logistic Regression Model. Applied Logistic Regression. 2000, New York: John Wiley & Sons Inc, 2nd edition.

  26. Goldacre MJ, Roberts SE, Griffith M: Place, time and certified cause of death in people who die after hospital admission for myocardial infarction or stroke. European Journal of Public Health. 2004, 14: 338-342. 10.1093/eurpub/14.4.338.

  27. Vidal EIO, Coeli CM, Pinheiro RS, Camargo KR: Mortality within 1 year after hip fracture surgical repair in the elderly according to postoperative period: a probabilistic record linkage study in Brazil. Osteoporos Int. 2006, 17: 1569-1576. 10.1007/s00198-006-0173-3.

  28. Nielsen KA, Jensen NC, Jensen CM, Thomsen M, Pedersen L, Johnsen SP, et al: Quality of care and 30 day mortality among patients with hip fractures: a nationwide cohort study. BMC Health Serv Res. 2009, 9: 186-10.1186/1472-6963-9-186.

  29. Ingeman A, Pedersen L, Hundborg HH, Petersen P, Zielke S, Mainz J, et al: Quality of care and mortality among patients with stroke - A nationwide follow-up study. Medical Care. 2008, 46: 63-69. 10.1097/MLR.0b013e3181484b91.

  30. Iwashyna TJ, Kahn JM, Hayward RA, Nallamothu BK: Interhospital Transfers Among Medicare Beneficiaries Admitted for Acute Myocardial Infarction at Nonrevascularization Hospitals. Circulation-Cardiovascular Quality and Outcomes. 2010, 3: 468-475. 10.1161/CIRCOUTCOMES.110.957993.

  31. Bosk EA, Veinot T, Iwashyna TJ: Which Patients and Where: A Qualitative Study of Patient Transfers from Community Hospitals. Medical Care. 2011, 49: 592-598. 10.1097/MLR.0b013e31820fb71b.

  32. Zaslavsky AM: Statistical issues in reporting quality data: small samples and casemix variation. International Journal for Quality in Health Care. 2001, 13: 481-488. 10.1093/intqhc/13.6.481.

Acknowledgements

The authors thank the hospitals for kindly submitting their data. Tomislav Dimoski developed the software necessary for data collection. Saga Høgheim assisted in the preparation of the data. Olaf Holmboe and Katrine Damgaard prepared the data files used for the analysis.

The work was partly funded by The Norwegian Directorate of Health.

Doris Tove Kristoffersen was supported by a grant from the Research Council of Norway.

Author information

Corresponding author

Correspondence to Doris T Kristoffersen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JCL was project leader for the data collection and the report which formed the basis for the present work. All authors conceived the design and content of this paper. DTK and JH performed the analysis. DTK drafted the first version. All authors contributed to revised versions, read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Kristoffersen, D.T., Helgeland, J., Clench-Aas, J. et al. Comparing hospital mortality – how to count does matter for patients hospitalized for acute myocardial infarction (AMI), stroke and hip fracture. BMC Health Serv Res 12, 364 (2012). https://doi.org/10.1186/1472-6963-12-364
