
Psychometric properties of the patient assessment of chronic illness care measure: acceptability, reliability and validity in United Kingdom patients with long-term conditions

Abstract

Background

The Patient Assessment of Chronic Illness Care (PACIC) is a US measure of chronic illness quality of care, based on the influential Chronic Care Model (CCM). It measures a number of aspects of care, including patient activation; delivery system design and decision support; goal setting and tailoring; problem-solving and contextual counselling; follow-up and coordination. Although there is developing evidence of the utility of the scale, there is little evidence about its performance in the United Kingdom (UK). We present preliminary data on the psychometric performance of the PACIC in a large sample of UK patients with long-term conditions.

Method

We collected PACIC, demographic, clinical and quality of care data from patients with long-term conditions across 38 general practices, as part of a wider longitudinal study. We assess rates of missing data, present descriptive and distributional data, assess internal consistency, and test validity through confirmatory factor analysis and through associations between PACIC scores, patient characteristics and related measures.

Results

Rates of missing data were high on the PACIC (9.6%–15.9%) and higher than on other scales used in the same survey. Most PACIC subscales showed reasonable levels of internal consistency (alpha = 0.68–0.94), responses did not demonstrate high levels of skewness, and floor effects were more frequent (up to 30.4% on the follow-up and co-ordination subscale) than ceiling effects (generally <5%). The PACIC demonstrated preliminary evidence of validity in relation to measures of long-term condition care. Confirmatory factor analysis suggested that the five-factor PACIC structure proposed by the scale developers did not fit the data: reporting separate factor scores may not always be appropriate.

Conclusion

The importance of improving care for long-term conditions means that the development and validation of measures is a priority. The PACIC scale has demonstrated potential utility in this regard, but further work is required to examine the low levels of completion of the scale, and to explore its performance in predicting outcomes and assessing the effects of interventions.


Background

Improving the quality of care for long-term conditions is an international priority [1, 2], which has led to significant focus on the delivery and evaluation of quality improvement activities such as provider and patient education [1], service redesign [3], use of technology [4], and financial incentives [5]. However, assessing the effects of these initiatives depends on acceptable, reliable and valid measures of quality. Although quality can be assessed from a number of perspectives, there is increasing agreement concerning the importance of the views of the patient [6].

Assessing the patient perspective generally requires self-report measures. To ensure their utility, measures must be subject to a significant programme of research to assess their acceptability to patients and their formal psychometric properties, including their use in contexts and populations different to those in which they were developed. In particular, where innovations in the management of long-term conditions cross national boundaries, measures of quality are needed that perform consistently in different health care settings to both support effective local policy implementation and to allow interpretable comparison of the performance of different health care systems worldwide.

The Patient Assessment of Chronic Illness Care (PACIC) is a United States (US) measure of quality of care for patients with a chronic illness [7]. The original PACIC includes 20 items and measures specific actions or qualities of care, based on the influential Chronic Care Model (CCM). The PACIC is designed around five subscales: (a) patient activation (b) delivery system design and decision support (c) goal setting and tailoring (d) problem-solving and contextual counselling (e) follow-up and coordination.

Although the scale was only published in 2005, the influence of the Chronic Care Model means that there is already a reasonable evidence base on the performance of the scale (see Table 1). The scale seems to be largely acceptable to patients with long-term conditions, with low levels of missing data [7–10], although some studies demonstrate skew and related floor and ceiling effects [11, 12]. Most assessments suggest acceptable levels of internal consistency [7, 9, 11, 13–15] and test-retest reliability [7, 13, 15]. Although the scale is based on a five factor conceptual model, there is less consensus over the degree to which responses reflect this structure [9, 11], with studies suggesting two factor or unidimensional structures may be a better fit to the data [8, 9, 14].

Table 1 Summary of published validity data on the PACIC

Validating scales such as the PACIC is a complex process. Although convergent validity with related scales (such as other patient self-report measures of quality) is useful [7, 8], construct validity is difficult because it is not clear exactly how factors such as age, sex, socioeconomic status and multimorbidity should relate to PACIC scores. Studies have demonstrated predicted relationships with measures of self management behaviour [10, 12] and self rated health [11, 14]. Studies relating PACIC to ‘harder’ outcomes such as clinical parameters have generally had less success [9, 13]. There are few prospective studies of the ability of PACIC to predict outcomes over time.

The published literature on the PACIC includes studies from the US [7, 9, 10, 12, 13, 15, 18], Canada [23], Denmark [11], Germany [8, 19, 21], Holland [22], Australia [14] and New Zealand [16]. Many findings are consistent across health systems and populations. However, at the time of writing there is little evidence about the performance of PACIC in the United Kingdom (UK), despite the major initiatives (such as the Quality and Outcomes Framework) which have been implemented in this setting to improve care for long-term conditions.

We present preliminary data on the psychometric performance of the PACIC in a large sample of UK patients with long-term conditions, in terms of acceptability, reliability and validity. For acceptability, we explored rates of missing data and compared them with rates found in the international literature. In terms of reliability, we assessed internal consistency at the scale and subscale level. In terms of validity, we explored floor and ceiling effects, factor structure, and associations between the PACIC and other care quality outcomes measures to test predicted relationships.

Methods

Data were collected as part of a wider longitudinal cohort designed to assess the impact of ‘care planning’ and written ‘care plans’ on patient outcomes [24]. We identified patients on clinical registers with long-term conditions in practices with high levels of ‘care plans’ as reported in the General Practice Patient Survey [25], and recruited comparable patients in similar practices reporting lower levels of written care plans. The study was not designed to provide population estimates, but to create patient groups differing in rates of ‘care plans’ but similar in all other characteristics. However, the sample should be adequate for assessing psychometric characteristics and associations between variables. The current analysis uses baseline data from the cohort. The following measures were used.

PACIC

As noted previously, the original version of PACIC used in the study includes 20 items based around five subscales: patient activation; delivery system design; goal setting; problem-solving and contextual counselling; and follow-up and co-ordination. Each item is rated on a five point scale (from ‘almost never’ to ‘almost always’) and subscale and total scores are based on average scores across items [7], with higher scores indicating higher quality of care. Item content is shown in Table 2. The scale was used without any major adaptation for a UK population, although ‘chronic condition’ was changed to ‘long-term condition’ as this is the more usual term used in the UK.
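
To make the scoring rule concrete, the sketch below derives subscale and total scores as item means, as described above. It is illustrative only: the column names (q1–q20) are assumptions, and the item-to-subscale assignment simply follows the item counts reported later in this paper rather than reproducing the published scoring key.

```python
import pandas as pd

# Hypothetical item columns q1..q20, each scored 1 ('almost never') to 5 ('almost always').
SUBSCALES = {
    "patient_activation":     ["q1", "q2", "q3"],
    "delivery_system_design": ["q4", "q5", "q6"],
    "goal_setting":           ["q7", "q8", "q9", "q10", "q11"],
    "problem_solving":        ["q12", "q13", "q14", "q15"],
    "follow_up_coordination": ["q16", "q17", "q18", "q19", "q20"],
}

def score_pacic(responses: pd.DataFrame) -> pd.DataFrame:
    """Subscale and total scores as means of item ratings (higher = better reported care)."""
    scores = pd.DataFrame(index=responses.index)
    for name, items in SUBSCALES.items():
        scores[name] = responses[items].mean(axis=1)
    all_items = [item for items in SUBSCALES.values() for item in items]
    scores["pacic_total"] = responses[all_items].mean(axis=1)
    return scores
```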

Table 2 Descriptive data on items and scales

Demographic and clinical characteristics

We measured socio-demographic variables (age, gender, work, and education). We asked patients to self report long-term conditions from a list (including high blood pressure, chest complaints, diabetes, heart problems, chronic kidney disease, stroke, cancer, anxiety and depression, arthritis, stomach or bowel problems, skin conditions, vision or hearing problems, neurological problems, chronic fatigue, thyroid or other problems). Patients also reported the professional they consulted with most frequently for their long-term conditions (GP, practice nurse or other, including community nurse, hospital doctor or hospital nurse) and the number of primary care consultations in the last six months.

Measures of quality of care

(a) Shared decision making

We measured shared decision making using the Health Care Climate Questionnaire (HCCQ) measure [26, 27]. The scale assesses patients’ perceptions of the degree to which their health professional is ‘autonomy supportive’ as opposed to ‘controlling’ when providing health care. Each item is scored on a 7-point scale ranging from ‘strongly disagree’ to ‘strongly agree’. We used the short form with 6 items, with an alpha of 0.8. Scale scores were recoded 0–100 for descriptive analysis, although there was evidence of significant skew.

(b) Quality of care for long-term conditions

We used a six-item scale employed in quality improvement activities in the UK, which assesses quality of care for long-term conditions with items relating to communication, patient involvement, information, support, co-ordination of care, and self-efficacy (QIPP scale). Each item is scored on a 4-point scale, with a range of scale anchors, and the item scores are averaged to create an overall score.

(c) Satisfaction with primary care

We assessed satisfaction with primary care with a single item 5 point scale (rated from ‘very dissatisfied’ to ’very satisfied’) [25]. Satisfaction data were very highly skewed.

Analysis

(a) Acceptability

The PACIC scale did not go through any formal translation or adaptation for the UK population. One indicator of the acceptability of a measure is how well its items are completed. We assessed acceptability for the UK population by examining completion rates and the extent of missing data. We computed missing data rates for items, sub-scales and the overall score. There are no published guidelines for dealing with missing values on the PACIC, so we adopted the arbitrary criterion that respondents must have completed at least 60% of the items on a subscale or the total scale to be included in analyses.
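
As a rough illustration of this acceptability check (a sketch, not the study's analysis code), the following computes item-level missing rates and applies the 60% completion criterion when deriving a scale score; the DataFrame is assumed to hold one column per PACIC item.

```python
import pandas as pd

def item_missing_rates(items: pd.DataFrame) -> pd.Series:
    """Proportion of missing responses for each item (column)."""
    return items.isna().mean()

def scale_score_with_threshold(items: pd.DataFrame, min_complete: float = 0.6) -> pd.Series:
    """Mean of the available item ratings per respondent, set to missing when fewer
    than 60% of the items contributing to the scale were completed."""
    proportion_completed = items.notna().mean(axis=1)
    return items.mean(axis=1).where(proportion_completed >= min_complete)
```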

(b) Reliability

As with many scales, multiple items are used to measure each PACIC subscale, on the basis that several observations will lead to a more reliable measure. This rests on the assumption that items within a subscale are homogeneous. We assessed the internal consistency of the PACIC by calculating Cronbach’s alpha for the full PACIC scale and for each subscale.
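
For reference, Cronbach’s alpha can be computed directly from the item responses. This is a minimal sketch of the standard formula, restricted to complete cases for simplicity; it is not the authors’ analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of summed score)."""
    complete = items.dropna()                    # complete cases only
    k = complete.shape[1]                        # number of items
    item_variances = complete.var(ddof=1).sum()
    total_variance = complete.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```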

(c) Validity

We calculated the proportions of patients scoring at floor and ceiling for subscales and the overall scale, and explored the distribution of subscale and overall scores.
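
A minimal sketch of the floor and ceiling check, assuming scores range from 1 to 5 as described above:

```python
import pandas as pd

def floor_ceiling(scores: pd.Series, floor: float = 1.0, ceiling: float = 5.0) -> dict:
    """Proportion of respondents scoring at the minimum (floor) and maximum (ceiling) values."""
    valid = scores.dropna()
    return {"floor": float((valid == floor).mean()),
            "ceiling": float((valid == ceiling).mean())}
```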

Confirmatory factor analysis was used to test the hypothesised factor structure of the PACIC: a five latent-factor model of quality of care in which all latent factors were allowed to covary with one another [9]. Structural equation modelling, using AMOS (version 16.0), was used to fit and test the factor structure. We conducted two analyses. The first was a ‘complete cases’ analysis using only those respondents with full data on all 20 items; the second adopted a less restrictive criterion and included patients with missing data on three or fewer of the 20 items and no more than 50% of items missing on any of the five scales. Stata’s method of multivariate normal regression was used to impute ratings for these cases. As the imputed data were non-integer and, in a few cases, outside the item scoring range, they were first rounded to the nearest integer and then recoded, where necessary, to the appropriate ‘anchor’ point. The method of maximum likelihood was adopted for parameter estimation; asymptotically distribution-free estimation was employed as a sensitivity analysis.
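
The model itself was fitted in AMOS. Purely as an illustrative sketch, a comparable correlated five-factor CFA could be specified in Python with the semopy package using lavaan-style syntax; the item names (q1–q20) and the item-to-factor mapping below are assumptions, and the resulting fit indices would be judged against the conventional thresholds shown in Table 4.

```python
import semopy

# Correlated five-factor measurement model; covariances between the latent factors are
# left free (they can also be written out explicitly with the '~~' operator).
FIVE_FACTOR_MODEL = """
activation       =~ q1 + q2 + q3
delivery_design  =~ q4 + q5 + q6
goal_setting     =~ q7 + q8 + q9 + q10 + q11
problem_solving  =~ q12 + q13 + q14 + q15
follow_up        =~ q16 + q17 + q18 + q19 + q20
"""

def fit_five_factor_cfa(data):
    """Fit the model (maximum-likelihood-type objective by default) and return fit statistics."""
    model = semopy.Model(FIVE_FACTOR_MODEL)
    model.fit(data)
    return semopy.calc_stats(model)   # chi-square, CFI, TLI, RMSEA, etc.
```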

The published evidence was mixed with respect to likely associations with demographic characteristics (see Table 1). We made no specific hypotheses, but report differences in the scores of different groups using linear regression (in Stata version 11.0), taking account of the within-practice clustering. Due to skewness in the distribution of the overall PACIC scores, standard errors were calculated using a bootstrap method, free from parametric assumptions, using 10,000 bootstrap samples.
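
One way to obtain bootstrap standard errors that respect the clustered design is to resample whole practices with replacement; the sketch below illustrates this approach (the column names and the example statistic are hypothetical, and this is not necessarily the exact procedure implemented in Stata).

```python
import numpy as np
import pandas as pd

def cluster_bootstrap_se(df: pd.DataFrame, cluster_col: str, stat_fn,
                         n_boot: int = 10_000, seed: int = 0) -> float:
    """Bootstrap standard error of a statistic, resampling whole clusters (practices)."""
    rng = np.random.default_rng(seed)
    clusters = df[cluster_col].unique()
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        sampled = rng.choice(clusters, size=len(clusters), replace=True)
        boot_df = pd.concat([df[df[cluster_col] == c] for c in sampled], ignore_index=True)
        estimates[b] = stat_fn(boot_df)
    return estimates.std(ddof=1)

# Example: standard error of the difference in mean PACIC total score between female and
# male respondents ('practice', 'female' and 'pacic_total' are hypothetical column names).
# se = cluster_bootstrap_se(data, "practice",
#                           lambda d: d.loc[d["female"] == 1, "pacic_total"].mean()
#                                     - d.loc[d["female"] == 0, "pacic_total"].mean())
```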

To assess construct validity, we hypothesised significant associations with measures of shared decision-making, quality of care and satisfaction with primary care services (Table 1). We assessed these relationships using Spearman non-parametric correlations, in view of the skewed distributions in the measures.
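
A minimal sketch of these correlations using SciPy’s Spearman routine; the column names for the PACIC total and the comparison measures are assumptions, and pairs with missing data are dropped.

```python
from scipy.stats import spearmanr

def spearman_with_pacic(df, comparison_cols=("hccq", "qipp", "satisfaction")):
    """Spearman rank correlations between the PACIC total score and other care measures."""
    results = {}
    for col in comparison_cols:
        pair = df[["pacic_total", col]].dropna()     # pairwise complete observations
        rho, p_value = spearmanr(pair["pacic_total"], pair[col])
        results[col] = (rho, p_value)
    return results
```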

Results

Responses were received from 2551 respondents (41%), although a small number with missing age and sex were removed for analysis, as were those who self reported no long-term conditions (despite being on a clinical register), leaving 2439 potential respondents for analysis (40%). Demographic and clinical data on respondents are provided in Table 3.

Table 3 Descriptive data on the study sample

Acceptability

Missing data rates for the PACIC items were high (Table 2), ranging from 9.6% to 15.9% at an item level. Between 11.2% and 15.7% of subscales could not be calculated because less than 60% of items were completed and 14.6% of patients were missing a total score. Ceiling effects were generally under 5%, although significant proportions of patients scored at the floor for patient activation (20.9%), goal setting (14.2%), problem solving (14.7%) and follow-up and co-ordination (30.4%).

Descriptives

The total PACIC score showed a reasonable distribution of scores, with some positive skew. Most of the subscales were also positively skewed, most notably the goal setting and follow-up subscales. The mean overall PACIC score was 2.4 (SD 0.87), with subscale means of 2.5 for patient activation, 3.1 for delivery system design, 2.2 for goal setting, 2.5 for problem solving, and 1.9 for follow-up and co-ordination. The distribution of PACIC scores demonstrated more symmetry and smaller ceiling effects than the QIPP, HCCQ and satisfaction scores. Importantly, this distribution means the PACIC has much greater capacity to reflect positive changes in individual scores than the latter scales (see Additional files 1, 2, 3, 4, 5, 6, 7, 8 and 9).

The intracluster correlation coefficient (ICC) for the total PACIC score was 0.040 (i.e. only 4% of the total variation in PACIC scores was due to differences in practice means, with the remaining 96% resulting from differences between patients), with subscale ICCs ranging from 0.029 to 0.042.
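
For reference, the ICC quoted here can be read as the standard variance-components ratio (a textbook definition rather than a formula taken from the paper):

```latex
\mathrm{ICC} = \frac{\sigma^{2}_{\text{between practices}}}{\sigma^{2}_{\text{between practices}} + \sigma^{2}_{\text{within practices}}}
```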

Reliability

Alpha reliabilities were as follows: patient activation (0.86, 3 items); delivery system design (0.68, 3 items); goal setting (0.82, 5 items); problem solving (0.86, 4 items); follow-up and co-ordination (0.82, 5 items); PACIC total (0.94, 20 items).

Validity (structure)

The complete case analysis of the hypothesised PACIC factor structure utilised 75.7% of the sample (n = 1846). The model did not fit the data well according to most indices of fit (actual indices and conventional levels of ‘good’ fit are presented in Table 4). Although the Standardised Root Mean-Squared Residual indicated that, on average, observed and predicted item variances and covariances were not too dissimilar, this masks a number of large differences on specific covariance terms. Inter-factor correlations were also generally high, ranging from 0.60 to 0.97 (the highest being between delivery system design and goal setting).

Table 4 Fit indices for the confirmatory factor analysis

Using the less restrictive criterion, a further 194 patients (8%) with some missing data were added to the analysis, but the overall results in terms of indices of fit were similar (Table 4).

Validity (construct)

The high inter-correlations between PACIC subscales and the failure to confirm a five-factor structure meant that analyses of construct validity focused on PACIC total scores only. Initial analysis explored associations with demographic characteristics. Females and patients aged 75 or more scored significantly lower on the total score (regression coefficients 0.18 and −0.20 respectively). The impact of increasing numbers of conditions and greater contact with a general practitioner was inconsistent. There was no association between scores and the professional most responsible for care of the long-term condition. All these relationships accounted for around 1% of the variance in PACIC scores (Table 5).

Table 5 Associations between PACIC scores and demographic variables

In terms of construct validity (Table 6), PACIC total scores were significantly associated with the single item measure of patient satisfaction with primary care (Spearman’s correlation 0.24) and demonstrated higher correlations with shared decision-making (Spearman’s correlation 0.47) and quality of care (Spearman’s correlation 0.54).

Table 6 Associations with other self-reported measures of care

The results were not markedly different when the analyses were rerun on the imputed data set (N = 1973).

Discussion

As health policy makers focus on the challenges of care for long-term conditions, significant funding is being channelled towards quality improvement in care delivery, through changes to skill mix, staff training, new technologies, and financial incentives. The success of these quality improvement efforts is in part dependent on effective measures to track current standards and assess the effectiveness of interventions. Such measures can also ensure that policy and clinical interventions are perceived by patients to be making improvements to care. This study represented a preliminary test of the utility of the PACIC for this purpose in the UK.

Summary of the results

Scores on the PACIC showed some skew, but were generally reasonably well distributed, with few scales showing the high levels of skew that are sometimes evident on other patient-reported measures of primary care. However, the amount of missing data at the item, subscale and overall levels was relatively high. This was higher than in comparable PACIC studies in the literature, where rates (when reported) were around 3–5% [7–9, 11]. It was also higher than rates found in other scales in the same survey – for example, the shorter QIPP had missing data rates of 3–5% at an item level.

Limitations of the study

As noted earlier, the study was designed as a longitudinal study to assess the potential benefits of care plans. The sample was not designed as a random sample of patients with long-term conditions, and the response rate, while in line with other published studies [28], does mean that the sample cannot be considered representative. It should function as a sample for preliminary assessment of the performance of the scale, although selective non-response (i.e. among more severely ill patients) may restrict range on some variables, which could in turn impact on estimated associations. Furthermore, care must be taken in interpreting descriptive data such as mean scores.

We did not have data to estimate some important aspects of reliability and validity, including test-retest reliability, criterion validity (as there is no accepted ‘gold standard’) or responsiveness to change. Our assessment of acceptability was limited to missing item rates, and did not explore other aspects, such as patient views of the scale, time to complete the scale, or cultural acceptability [29].

Interpretation of the results

It is not clear why non-completion rates were so much higher than in comparable studies, and there is a lack of data on patient characteristics (such as education and health literacy) in the current sample which might explain these high rates. Examination of the item content of the PACIC suggests that some phrases (such as ‘nutritionist’) may be unfamiliar to some patients, and others (such as ‘hard times’) may be interpreted differently in UK and other populations. Informal discussions with some patients during administration of the survey suggested that some PACIC items may make assumptions about existing care in the UK which may be inappropriate. For example, question 1 is ‘Asked about my ideas when we made a treatment plan’, and question 8 asks patients if they were ‘given a copy of my treatment plan’. These items represent reports of care, but the response options do not offer a ‘not relevant’ option. It is therefore possible that some ‘missing’ responses reflect patients who felt that the question was irrelevant to their current care, rather than simply representing activities that were infrequent, as evidence suggests that written treatment plans are not a consistent part of care for long-term conditions in the UK [24]. The current response format may thus be generating missing data that in fact reflect meaningful responses. It has been suggested that response scales may reasonably be modified to suit local context, and this might improve performance of the scale in the UK, although at some potential cost in comparability across studies [30]. This issue requires further investigation, possibly including cognitive testing and other qualitative methods, to make the scale more suitable for the UK population.

If the scores of respondents can be considered meaningful, it is interesting that scores on the PACIC in the UK are relatively low. Some scales showed a high prevalence of scores at the floor, and the mean scores were generally lower than those reported in the wider literature. For example, the mean PACIC total score was 2.4, compared to 2.6 in patients in US primary care [7], 3.3 in depressed primary care patients in Germany [8], 2.7 in patients with osteoarthritis in German primary care [19], 3.0 in patients with CHD, hypertension or diabetes in Australian general practice [14] and 3.2 in Hispanics with diabetes in hospital ambulatory settings in the US [15]. Patient activation, follow-up and co-ordination, and problem solving were particularly low in the current sample. Of course, there is a lack of data on calibration of the PACIC against other measures which would allow judgements of the clinical or policy significance of such differences, even if they were statistically significant. However, the low scores may seem surprising given the importance placed on structured delivery of care for long-term conditions through the Quality and Outcomes Framework, which has seen changes to skill mix, and increased use of information technology and protocols for monitoring patients and delivering standardised care in line with the Chronic Care Model [31]. Recent evidence suggests that patients with complex care needs in the UK rate their experience of a ‘patient-centered medical home’ (characterised by high access, professionals who know their medical history, and care coordination) higher than those in other countries [32]. However, there is evidence that the content of care has changed, with an increased focus on biomedicine and less on self-management and psychosocial issues [33–35], and it is possible that the scores reflect this.

Generally, the PACIC subscales showed appropriate levels of internal reliability. We did not set an a priori criterion for reliability, although our implicit assumption was that alphas should be between 0.7 and 0.9, in line with published convention [29]. Cronbach’s alpha for delivery system design was lower than for the other subscales (0.68 vs. >0.80). This pattern is consistent with data from other studies where reported [7, 8, 22], and as these studies are from the USA, Germany and Holland, the lower reliability seems unlikely to reflect the UK health service.

As indicated in Table 1, studies have reported variable relationships with demographic and clinical variables. We found lower PACIC scores in females, while the bulk of studies report non-significant relationships [8, 11, 12, 15], although this may reflect the greater statistical power of the current analysis, as the proportion of variance accounted for by gender was trivial. The same patterns were in evidence for relationships with age [8, 11, 12, 15, 20] and number of conditions [7, 8, 11, 12, 15].

In terms of validity, the PACIC showed the hypothesised associations with shared decision-making and assessments of quality of care and patient satisfaction. Global measures of satisfaction generally reflect patient assessments of interpersonal care, and it appears that PACIC is not simply reflecting the quality of the doctor-patient relationship or patients’ liking for their doctor, as the associations are relatively low. The different distributions of scores indicate that PACIC has the potential to add value to the assessment of practice and professional performance.

The factor analysis suggested that the five factor structure was not supported by the data. Although further analysis might formally test alternative models of the relationships between items, calculating total PACIC scores based on all 20 items might be the most appropriate scoring method. It should be noted that maximum likelihood estimation is not considered to be the best method for use with ordinal data [36], as it was developed for continuous variables with a joint multivariate normal distribution. However, the large sample size, coupled with the knowledge that we are following applied measurement practice for this instrument (i.e. item scores are simply summed to form subscale and overall scores) justify its use here.

Some previous analyses have supported the five factor structure [7, 20, 22], although technical aspects of these analyses have been criticised [30]. Of course, as this is the first published assessment of the PACIC scale in the UK, the failure to confirm the factor structure may reflect characteristics of the service context and the patient population, such as the gap between the assumptions inherent in PACIC items and the experience of patients that was raised in the discussion of missing data above. If patient experience of care for long-term conditions is not effectively reflected in the PACIC items, a clear factor structure may be less likely to emerge.

More fundamentally, the appropriateness of factor analysis (and internal reliability estimates) has been questioned. Underlying these techniques is the assumption that responses to individual items are caused by an underlying construct [37]. Patients reporting inconsistent patterns across related items may not reflect instrument problems, but inconsistency in their experience of separate and distinct aspects of care. If this is the case, conventional assessments of factor structure and internal reliability may be less useful [30].

Although data are available in the baseline cohort, we have not reported associations between PACIC and patient health behaviour and health outcomes. We do not feel that these are correctly conceptualised as measures of validity for a single scale: rather, the association between quality of care and patient outcomes (and the importance of care quality compared to other drivers such as demography, socio-economic status and self-management behaviour) is a core empirical question for health services research and delivery [38]. The priority is to explore whether quality of care predicts outcomes over time, where evidence is far more limited. Our longitudinal survey is designed to allow this to be estimated prospectively and we will publish data in due course.

Conclusions

In summary, the study suggests that the use of PACIC may lead to relatively high levels of missing data among UK patients, although the reasons for that would benefit from further research. However, our analyses suggest reasonable levels of reliability and validity. The instrument also demonstrates a more symmetrical distribution than most patient-reported measures and a higher capacity to capture positive change, giving the scale (and the modified version currently proposed) considerable potential as a measure of the delivery of core components of care for long-term conditions in the UK.

References

  1. Wagner E: Chronic disease management: What will it take to improve care for chronic illness?. Effective Clinical Practice. 1998, 1: 2-4.

  2. Department of Health: Supporting people with long term conditions: An NHS and social care model to support local innovation and integration. 2005, London

  3. Kennedy A, Rogers A, Bower P: Support for self care for patients with chronic disease. BMJ. 2007, 335: 968-970. 10.1136/bmj.39372.540903.94.

  4. Murray E, Burns J, See Tai S, Lai R, Nazareth I: Interactive Health Communication Applications for people with chronic disease. Cochrane Database of Systematic Reviews. 2005, 10.1002/14651858. Issue 4. Art. No. CD004274

  5. Campbell S, Reeves D, Kontopantelis E, Sibbald B, Roland M: Effects of pay for performance on the quality of primary care in England. N Engl J Med. 2009, 361: 368-378. 10.1056/NEJMsa0807651.

  6. Campbell S, Roland M, Buetow S: Defining quality of care. Soc Sci Med. 2000, 51: 1611-1625. 10.1016/S0277-9536(00)00057-5.

  7. Glasgow R, Wagner E, Schaefer J, Mahoney L, Reid R, Greene S: Development and validation of the Patient Assessment of Chronic Illness Care. Med Care. 2005, 43: 436-444. 10.1097/01.mlr.0000160375.47920.8c.

  8. Gensichen J, Serras A, Paulitsch M, Rosemann T, König J, Gerlach F, Petersen J: The Patient Assessment of Chronic Illness Care questionnaire: evaluation in patients with mental disorders in primary care. Community Ment Health J. 2011, 47: 453.

  9. Gugiu C, Coryn C, Applegate B: Structure and measurement properties of the Patient Assessment of Chronic Illness Care instrument. J Eval Clin Pract. 2010, 16: 509-516.

  10. Schmittdiel J, Mosen D, Glasgow R, Hibbard J, Remmers C, Bellows J: Patient assessment of chronic illness care (PACIC) and improved patient-centered outcomes for chronic conditions. J Gen Intern Med. 2011, 23: 77-80.

  11. Maindal H, Sokolowski I, Vedsted P: Adaptation, data quality and confirmatory factor analysis of the Danish version of the PACIC questionnaire. Eur J Public Health. 2010

  12. Glasgow R, Whitesides H, Nelson C, King D: Use of the Patient Assessment of Chronic Illness Care (PACIC) with diabetic patients. Diabet Care. 2005, 28: 2655-2661. 10.2337/diacare.28.11.2655.

  13. Gugiu P, Coryn C, Clark R, Kuehn A: Development and evaluation of the short version of the Patient Assessment of Chronic Illness Care instrument. Chronic Illness. 2009, 5: 268-276. 10.1177/1742395309348072.

  14. Taggart J, Chan B, Jayasinghe U, Christl B, Proudfoot J, Crookes P, Beilby J, Black D, Harris M: Patients Assessment of Chronic Illness Care (PACIC) in two Australian studies: structure and utility. J Eval Clin Pract. 2011, 17: 215-221. 10.1111/j.1365-2753.2010.01423.x.

  15. Aragones A, Schaefer E, Stevens D, Gourevitch M, Glasgow R, Shah N: Validation of the Spanish translation of the Patient Assessment of Chronic Illness Care (PACIC) survey. Preventing Chronic Disease. 2008, 5: 1-10.

  16. Carryer J, Budge C, Hansen C, Gibbs K: Providing and receiving self-management support for chronic illness: Patients' and health practitioners' assessments. Journal of Primary Health Care. 2010, 2: 124-129.

  17. Goetz K, Freund T, Gensichen J, Miksch A, Szecsenyi J, Steinhauser J: Adaptation and psychometric properties of the PACIC Short Form. Am J Manag Care. 2012, 18: e55-e60.

  18. Jackson G, Weinberger M, Hamilton N, Edelman D: Racial/ethnic and educational-level differences in diabetes care experiences in primary care. Primary Care Diabetes. 2008, 2: 39-44. 10.1016/j.pcd.2007.11.002.

  19. Rosemann T, Laux G, Szecsenyi J, Grol R: The Chronic Care Model: congruency and predictors among primary care patients with osteoarthritis. Qual Saf Health Care. 2008, 17: 442-446. 10.1136/qshc.2007.022822.

  20. Rosemann T, Laux G, Droesemeyer S, Gensichen J, Szecsenyi J: Evaluation of a culturally adapted German version of the Patient Assessment of Chronic Illness Care (PACIC 5A) questionnaire in a sample of osteoarthritis patients. J Eval Clin Pract. 2007, 13: 806-813. 10.1111/j.1365-2753.2007.00786.x.

  21. Szecsenyi J, Rosemann T, Joos S, Peters-Klimm F, Miksch A: German diabetes disease management programs are appropriate for restructuring care according to the chronic care model. Diabet Care. 2011, 31: 1150-1154.

  22. Wensing M, van Lieshout J, Jung H, Hermsen J, Rosemann T: The Patient Assessment Chronic Illness Care (PACIC) questionnaire in The Netherlands: a validation study in rural general practice. BMC Health Services Research. 2008, 8: 182-10.1186/1472-6963-8-182.

  23. McIntosh C: Examining the factorial validity of selected modules from the Canadian Survey of Experiences with Primary Health Care. http://www.statcan.ca Ottawa; 2011

  24. Burt J, Roland M, Paddison C, Reeves D, Campbell J, Abel G, Bower P: Use and benefits of care plans and care planning for people with long-term conditions in England. J Health Serv Res Policy. 2012, 17: 64-71. 10.1258/jhsrp.2011.010172.

  25. Campbell J, Smith P, Nissen S, Bower P, Elliott M, Roland M: The GP Patient Survey for use in primary care in the National Health Service in the UK - development and psychometric characteristics. BMC Family Practice. 2009, 10.

  26. Williams G, McGregor H, Zeldman A, Freedman Z, Deci E: Testing a self-determination theory process model for promoting glycemic control through diabetes self-management. Health Psychol. 2004, 23: 58-66.

  27. Williams G, McGregor H, Kind D, Nelson C, Glasgow R: Variation in perceived competence, glycemic control, and patient satisfaction: relationship to autonomy support from physicians. Pat Educ Couns. 2005, 57: 39-45. 10.1016/j.pec.2004.04.001.

  28. Roland M, Elliott M, Lyratzopoulos G, Barbiere J, Parker R, Smith P, Bower P, Campbell J: Reliability of patient responses in pay for performance schemes: analysis of national General Practitioner Patient Survey data in England. BMJ. 2009, 339: b3851-10.1136/bmj.b3851.

  29. Fitzpatrick R, Davey C, Buxton M, Jones D: Evaluating patient-based outcome measures for use in clinical trials. Health Technol Assess. 1998, 2 (14).

  30. Spicer J, Budge C, Carryer J: Taking the PACIC back to basics: the structure of the Patient Assessment of Chronic Illness Care. J Eval Clin Pract. 2010, 6.

  31. Maisey S, Steel N, Marsh R, Gillam S, Fleetcroft R, Howe A: Effects of payment for performance in primary care: qualitative interview study. J Health Serv Res Policy. 2008, 13: 133-139. 10.1258/jhsrp.2008.007118.

  32. Schoen C, Osborn R, Squires D, Doty M, Pierson R, Applebaum S: New 2011 survey of patients with complex care needs in eleven countries finds that care is often poorly coordinated. Health Aff. 2012, 30: 2437-2448.

  33. Checkland K, Harrison S, McDonald R, Grant S, Campbell S, Guthrie B: Biomedicine, holism and general medical practice: responses to the 2004 General Practitioner contract. Sociology of Health and Illness. 2008, 30: 788-803. 10.1111/j.1467-9566.2008.01081.x.

  34. Blakeman T, MacDonald W, Bower P, Gately C, Chew-Graham C: A qualitative study of GPs' attitudes to self-management of chronic disease. Br J Gen Pract. 2006, 56: 407-414.

  35. MacDonald W, Rogers A, Blakeman T, Bower P: Practice nurses and the facilitation of self-management in primary care. J Adv Nurs. 2008, 62: 191-199. 10.1111/j.1365-2648.2007.04585.x.

  36. Flora D, Curran P: An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychological Methods. 2004, 9: 466-491.

  37. Bollen K, Lennox R: Conventional wisdom on measurement: a structural equation perspective. Psychol Bull. 1991, 110: 305-314.

  38. Kahn K, Tisnado D, Adams J, Liu H, Chen W, Hu F, Mangione C, Hays R, Danberg C: Does ambulatory process of care predict health-related quality of life outcomes for patients with chronic disease?. Health Serv Res. 2007, 42: 63-83. 10.1111/j.1475-6773.2006.00604.x.

Acknowledgements

This paper is based on research commissioned and funded by the Policy Research Programme in the Department of Health. The views expressed are not necessarily those of the Department of Health.

Author information

Corresponding author

Correspondence to Peter Bower.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

PB, MR, BS and DR are applicants on the grant. JR and KR conducted the survey, with PB. MH conducted the analyses. PB and JR wrote the paper and all other authors commented on earlier drafts. All authors read and approved the final manuscript.

Electronic supplementary material

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Rick, J., Rowe, K., Hann, M. et al. Psychometric properties of the patient assessment of chronic illness care measure: acceptability, reliability and validity in United Kingdom patients with long-term conditions. BMC Health Serv Res 12, 293 (2012). https://doi.org/10.1186/1472-6963-12-293
