
Open Access Research article

An approach to measure compliance to clinical guidelines in psychiatric care

Tord Forsner1*, Anna Åberg Wistedt2, Mats Brommels3,4 and Yvonne Forsell1

Author Affiliations

1 Department of Public Health Sciences, Karolinska Institutet, Stockholm, SE-171 76, Sweden

2 Department of Clinical Neuroscience, Section of Psychiatry St Göran's Hospital, Karolinska Institutet, Stockholm, SE-112 81, Sweden

3 Medical Management Centre, Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, SE-171 77 Sweden

4 Department of Public Health, University of Helsinki, Helsinki, Finland


BMC Psychiatry 2008, 8:64  doi:10.1186/1471-244X-8-64

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-244X/8/64


Received: 18 January 2008
Accepted: 25 July 2008
Published: 25 July 2008

© 2008 Forsner et al; licensee BioMed Central Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background

The aim of this study was to measure six-month compliance to Swedish clinical guidelines in psychiatric care after an actively supported implementation process, using structured measures derived from the guidelines.

Methods

In this observational study, four psychiatric clinics each participated in active implementation of the clinical guidelines for the assessment and treatment of depression and the guidelines for the assessment and treatment of patients with suicidal behaviours, developed by the Stockholm Medical Advisory Board for Psychiatry. The implementation programme included seminars, local implementation teams, regular feedback and academic visits. Two additional clinics received only the guidelines and served as controls. Compliance to the guidelines was measured using indicators that operationalised the requirements of preferred clinical practice. 725 patient records were included, 365 before the implementation and 360 six months after.

Results

Analyses of the registered indicators showed that the actively implementing clinics significantly improved their compliance to the guidelines. The total score differed significantly between implementation clinics and control clinics for the management of depression (mean scores 9.5 (1.3) versus 5.0 (1.5), p < 0.001) as well as for the management of suicide (mean scores 8.1 (2.3) versus 4.5 (1.9), p < 0.001). No changes were found in the control clinics, and only one odds ratio was significant.

Conclusion

Compliance to clinical guidelines measured by process indicators of required clinical practice was enhanced by an active implementation.

Background

Interest in evidence-based medicine (EBM) has grown exponentially. The focus has been on helping clinicians, patients and policy makers to use the best scientific evidence in their decision-making [1]. Several studies stress the difficulties of implementing EBM and the challenges of achieving performance change in health care [2-4].

Psychiatric care is changing more rapidly than other areas of medicine [5]. Variations in the quality of psychiatric care, for example in depression treatment, have been described in psychiatric practice, including gaps between clinical practice and evidence-based guideline recommendations [6,7]. Clinical guidelines are useful tools for increasing the use of EBM and implementing research findings [8-10]. Successfully implementing clinical guidelines and changing practices to reflect current evidence is an important goal, and efficient tools are needed to achieve it [11]. There is also a need for continuous feedback and formal evaluation of guideline compliance. One way of evaluating guideline implementation is to use indicators derived from the clinical guidelines as measures. When these indicators represent best practice as prescribed by the literature, or the consensus view of experts, it can be argued that they are also process measures of the quality of care. As such, guideline-based indicators are a powerful tool for monitoring care and identifying areas of clinical care needing improvement [12]. A recent systematic review by Weinmann et al. [13] emphasised that there are only a small number of implementation studies of psychiatric guidelines. The effects of the guidelines, and the role of involvement, on improvement were reported to be moderate or limited. Complex multifaceted interventions, specific psychological methods, feedback and ongoing support were associated with a more positive outcome [13].

Health care is an important part of the Swedish welfare system, and the Health Service Act states that all citizens should have equal access to health care services, regardless of where they live or their financial situation. Regional health authorities, the county councils, own and operate nearly all hospital and primary care services. In Stockholm County, representatives of public purchasers and providers meet in the Stockholm Medical Advisory Board in order to develop regional clinical guidelines. The shared aim is to provide high-quality care on equal terms for all county citizens [14,15]. Within this medical advisory organisation, the Stockholm Medical Advisory Board for Psychiatry has developed clinical guidelines for different psychiatric disorders, with the intention that these be implemented in all psychiatric clinics in Stockholm County. After the publication of the clinical guideline for depressive disorders in 2003, a pilot study was conducted in order to monitor the implementation. An implementation programme was initiated based on the regular assessment of outcome and process quality parameters, repeated benchmarking and feedback mechanisms. The implementation process was introduced and found feasible. This study reports the findings of a quasi-experiment involving observations before and after the active implementation of clinical guidelines for the care of depressed patients and suicidal patients, with a comparison group. Guideline compliance was assessed by identifying predefined measures of the requirements of good practice in patient records.

Methods

The hypotheses to be tested were: that the overall depression care quality score is higher for patients treated in implementation clinics than in control clinics, and that the overall suicidality care quality score is higher for patients treated in implementation clinics than in control clinics.

Participating clinics

There were six psychiatric departments in Stockholm County at the time of the study. Before the study started, all heads of psychiatric departments were invited to, and participated in, an implementation conference in Stockholm. Four departments decided to participate in the study: two in the implementation of the guidelines for depression and two in the implementation of the guidelines for suicidal behaviours. Six psychiatric clinics from the four departments were included. Two of the clinics only received the guidelines and served as controls. The duration of the study was six months.

Implementation process at the four intervention clinics

The implementation started with a series of seminars, based on the clinical guidelines, that engaged all staff members. The psychiatrists who had taken part in the design of the guidelines led the seminars. At each of the four clinics, local multidisciplinary teams of nurses, physicians, counsellors and psychologists were established. Two of the clinics implemented the clinical guidelines for the assessment and treatment of depression, and two clinics implemented the guidelines for the assessment and treatment of patients with suicidal behaviours. The teams set local goals and identified needs for education and training after analysing the gap between the clinical guidelines and ongoing practice. During the process these goals were evaluated at regular meetings and the clinical guidelines were refined. A number of indicators operationalising the guidelines' requirements on good clinical practice were defined. The presence of these indicators in the patient records was documented and served as a measure of guideline adherence. Internal benchmarking using the registered data was performed at regular intervals: the clinics reported their results, which were compared to the averages of the other three intervention clinics. The first author made regular visits to the clinics during the six-month period. In summary, the implementation programme included multifaceted interventions, seminars, local implementation teams, regular feedback and academic visits.

Implementation at the two control clinics

The guidelines were also distributed to the comparison clinics, but no seminars were conducted and no local teams were established.

Included medical records

Patient records from adult men and women who had an ICD-10 or DSM-IV diagnosis of depression were eligible for inclusion in the study of the implementation of the clinical guidelines for depression [16,17]. Patient records were randomly selected. From the two intervention clinics, 122 records were selected before and 121 records six months after the start of the implementation process. At the control clinics, 61 records were selected before and 60 records after. For the implementation of the clinical guidelines for suicide attempters, the inclusion criterion was patient records from adult men and women who had been assessed at a psychiatric emergency clinic after a suicide attempt. From the two intervention clinics, 121 records were selected before and 120 records six months after the start of the implementation process. At the control clinics, 61 records were selected before and 60 records after. The inter-rater reliability revealed a Cohen's Kappa statistic of 0.92–1.0, lowest for the assessment of the documentation of the treatment plan (care plan).

Measure of compliance

Requirements of recommended practice included in the regional clinical guidelines for depression and for suicide attempters, released in 2002 and 2003, were used as indicators of compliance [14,15]. Process indicators developed to assess the quality of care for depressed and suicidal patients were used. To assess whether the requirements were met (and thus the indicator was "present") when interpreting notes in the medical records, a modified audit instrument by Gardulf and Nordström [18] was used. Absence of data was assumed to indicate that the relevant activities had not been undertaken. The presence of an indicator was given a score from 0 to 2 (0 = absent, 1 = present but not documented exactly according to the definition, and 2 = a clear occurrence). For example, a score of one was given if a suicide assessment was performed but was not structured according to the definition. The scale was then dichotomised: a score of 0 remained 0, while scores of 1 and 2 were collapsed into 1. The maximum total score was 11 points for the depression guidelines and 13 points for the guidelines for the treatment of suicidal behaviours.
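As a minimal sketch, the scoring and dichotomisation scheme described above can be expressed as follows. The indicator names and raw scores are hypothetical illustrations, not data from the study:

```python
# Sketch of the compliance scoring described in the Methods: each
# indicator is rated 0 (absent), 1 (present but not documented exactly
# per the definition) or 2 (clear occurrence), then dichotomised
# (0 -> 0; 1 and 2 -> 1) and summed into a total compliance score.

def dichotomise(raw_score: int) -> int:
    """Collapse the 0-2 rating into 0/1 as described in the Methods."""
    if raw_score not in (0, 1, 2):
        raise ValueError("raw indicator scores must be 0, 1 or 2")
    return 0 if raw_score == 0 else 1

def total_compliance(raw_scores: dict) -> int:
    """Sum of dichotomised indicator scores for one patient record."""
    return sum(dichotomise(s) for s in raw_scores.values())

# Hypothetical record; the indicator names are illustrative only.
record = {
    "referral_to_contact_time": 2,
    "diagnostic_assessment": 1,
    "standardized_rating_scale": 0,
    "treatment_plan": 2,
}
print(total_compliance(record))  # 3 of the 4 indicators counted as present
```

Under this scheme a record's total score ranges from 0 to the number of indicators audited (11 for depression, 13 for suicidal behaviours).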

Indicators from the clinical guidelines for depression

• The time between referral and contact is recorded.

• Diagnostic assessment: the medical record should include at least three of the DSM-IV symptoms for Major Depression [17].

• Standardized rating scale: clinical depression assessment performed using a standardized rating scale.

• Diagnostic structured instrument documented, e.g. Structured Clinical Interview for DSM-IV-TR (SCID) [19].

• The use of a standardized rating scale during treatment for assessment of symptoms and behaviour documented.

• Substance/drug abuse assessment performed using screening instrument e.g. The Alcohol Use Disorders Identification Test (AUDIT) [20].

• Treatment plan (care plan) documented.

• Evaluation of outcome included, e.g. documentation of whether the patient had responded to antidepressive treatment, achieved symptom remission or reduction of symptoms between admission and follow-up.

• Continuity: the ability to provide uninterrupted care over time, measured as the number of clinicians involved per patient episode.

• Structured suicide assessment using a standardized rating scale documented.

• Antidepressant medication documented, started at the second visit.

Indicators from the clinical guidelines for suicide attempters

All indicators listed above, plus the following additional ones:

• Specialist assessment: a documented assessment done by a senior physician within 24 hours after the suicide attempt.

• Follow-up assessments performed.

• Evaluation assessment after discharge documented.

Data collection

Staff from the participating local teams at each clinic reviewed the medical records and documented the presence of the compliance indicators. The first author instructed them, and a consensus meeting, including a calibration process, was held. The first author used a random replicate sample of 40 medical records to assess inter-rater reliability. In addition, the staff doing the reviews received regular tutoring. The study took place over a six-month period: the first data collection was performed in May 2003, before the implementation, and the second data collection took place in November 2003. Data from the administrative information systems were used to identify records that fulfilled the inclusion criteria, and samples from each clinic were selected at random dates during the study period. The study was approved by the Central Ethical Review Board at Karolinska Institutet.

Statistical analysis

The data were analysed using SPSS for Windows, version 15. Inter-rater reliability was analysed by calculating Cohen's Kappa. The statistical significance of the differences before and after the implementation was calculated using chi-square tests. Associations between age, gender and the percentages of patients being treated in accordance with each indicator were analysed using Pearson correlation tests. Age- and gender-adjusted odds ratios were calculated in order to analyse the six-month compliance after the implementation. Multiple regression analysis was performed to control for age and gender in the change of total scores in the intervention and control groups.
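The inter-rater reliability statistic mentioned above can be illustrated with a small, self-contained sketch of Cohen's Kappa. The study used SPSS; this pure-Python version and the example ratings are only for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters rating the same items.

    Kappa = (observed agreement - chance agreement) / (1 - chance agreement),
    where chance agreement comes from each rater's marginal category frequencies.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("need two equal-length, non-empty rating lists")
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    if expected == 1.0:
        return 1.0  # both raters used a single category throughout
    return (observed - expected) / (1 - expected)

# Illustrative only: two raters agreeing on 9 of 10 dichotomised indicators.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # 0.78
```

Note that Kappa discounts chance agreement, so 90% raw agreement here yields a Kappa of about 0.78, below the 0.92–1.0 range the study reported.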

Results

Compliance to the guidelines for depressive disorders

At baseline, 122 patient records were included at the implementation clinics and 61 at the control clinics. There were no age or gender differences between patients from the intervention and control clinics. The percentages of patients being treated in accordance with each indicator at the implementation and control clinics are presented in Table 1, columns 1 and 3. Some of the indicators were more often recorded in the implementation clinics: accessibility, diagnostic instrument, standardized rating scale initially and during treatment, substance/drug abuse and treatment plan. The total score differed significantly between implementation clinics and control clinics. Multiple linear regression analysis was used to compare the change in total depression scores in the intervention and control groups, controlling for age and gender. The Beta value was significant for group (Beta = 0.54, p < 0.001), but not for age (Beta = -0.01, p < 0.9) or gender (Beta = 0.08, p < 0.2).

Table 1. Percentages of patients being treated in accordance with each indicator at baseline before the implementation of clinical guidelines.

At the six-month follow-up, 120 new patient records were included at the implementation clinics and 60 at the control clinics. There were no gender differences, but the mean age of the patients was slightly lower at the implementation clinics (35.4 years (SD 11.4) versus 38.6 years (SD 9.6), t = 1.9, df 178, p < 0.1). The only quality indicator that had an association with age was evaluation of outcome, which was less often registered in older patients. The percentages and numbers of patients being treated in accordance with each indicator at follow-up are presented in Table 1, columns 2 and 6. Age- and gender-adjusted odds ratios for compliance with each depression indicator at the implementation clinics at the six-month follow-up are presented in Table 2. Compliance was better for 8 of the 11 depression indicators in the implementation group than in the control group. For the indicator evaluation of outcome, analyses were divided into different age groups (18–34 years (n = 125), 35–49 years (n = 80) and ≥50 years (n = 37)); the OR was not significant in any of the three age groups. The results from the control clinics are presented as a comparison, and only substance/drug abuse was significant. The mean score increased by 3.4 in the implementation clinics and decreased by 0.2 in the control clinics (t = 19.7, p < 0.001).

Table 2. The odds ratio of compliance six months after the implementation of clinical guidelines for the management of depression.

Compliance to the guidelines for the management of suicide attempters

At baseline, 121 patient records were included at the implementation clinics and 61 records at the control clinics. There were no gender differences, but the patients at the implementation clinics were slightly younger (32.5 years (SD 12.2) versus 38.3 years (SD 15.1), t = 2.8, df 180, p < 0.01). The only indicator whose registration differed with age was continuity of care giver, which was less often recorded for older patients. The differences between the implementation clinics and the control clinics in the documentation of the quality indicators are presented in Table 1, columns 1 and 3. Some of the indicators were more often recorded in the implementation clinics: diagnostic assessment, standardized rating scale initially, evaluation and evaluation assessment. Others were more frequently recorded at the control clinics: accessibility, substance/drug abuse, suicide and specialist assessment. The mean score did not differ between implementation clinics and control clinics. For continuity of care giver, separate analyses were made in three age groups (18–34, 35–49 and ≥50 years). There were differences between the implementation and control clinics in the two younger age groups but not in the oldest (Chisq 12.4, df 1, p < 0.001; Chisq 19.0, df 1, p < 0.01; ns).

At the six-month follow-up, 120 patient records were included at the implementation clinics and 60 records at the control clinics. There were no age differences, but the patients at the implementation clinics were more often female (70.8% versus 58.3%, chisq = 2.8, df 1, p < 0.1). The only quality indicator that had an association with gender was specialist assessment, which was less often registered for females. The percentages and numbers of patients being treated in accordance with each indicator at follow-up are presented in Table 1, columns 2 and 6. Age- and gender-adjusted odds ratios for compliance with each indicator at the implementation clinics at the six-month follow-up are presented in Table 3. For the indicator specialist assessment, analyses were divided according to gender. In both genders the OR was significant; females: 3.4 (1.7–6.9), males: 33.0 (6.8–161.3). The results from the control clinics are presented as a comparison, and none of the ORs was significant. The mean score increased by 2.8 in the implementation clinics and decreased by 0.8 in the control clinics (t = 9.5, p < 0.001). A procedure similar to that used for the total depression score was performed for the total suicide scores: the Beta value was significant for group (Beta = 0.3, p < 0.001), but not for age (Beta = -0.04, p < 0.4) or gender (Beta = 0.03, p < 0.5).

Table 3. The odds ratio of compliance six months after the implementation of clinical guidelines for the management of suicidal patients.
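For orientation, an unadjusted odds ratio with a 95% confidence interval can be computed directly from a 2×2 table of indicator presence by group. The cell counts below are invented for illustration; the ORs reported in Tables 2 and 3 were additionally adjusted for age and gender, which this sketch does not attempt:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI from a 2x2 table:
    a = intervention, indicator present; b = intervention, absent;
    c = control, present; d = control, absent."""
    if min(a, b, c, d) <= 0:
        raise ValueError("all four cell counts must be positive")
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: 90/120 intervention records versus 30/60
# control records meeting a given indicator.
or_, (lo, hi) = odds_ratio_ci(90, 30, 30, 30)
print(f"OR = {or_:.1f}, 95% CI ({lo:.1f}-{hi:.1f})")  # OR = 3.0, 95% CI (1.6-5.8)
```

A confidence interval excluding 1.0, as in this invented example, is what "significant OR" means in the tables above.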

Discussion

This study showed that six-month compliance to clinical guidelines for the treatment of depression and the management of suicide attempters, measured by indicators of required clinical practice, was enhanced by an active implementation. The clinics to which the guidelines were only disseminated showed no improvement.

A multifaceted intervention including several active strategies is more likely to be effective than a single active strategy [21]. Interactive approaches, such as audit, feedback and academic detailing, seem most effective at changing physician care and patient outcomes, but are insufficient by themselves [22]. Shortell et al. [23] suggested that four dimensions are needed for a successful implementation process: strategic, cultural, technical and structural. Our implementation programme included all of these. The programme consisted of the introduction of regional mandatory evidence-based clinical guidelines for psychiatric disorders in Stockholm County, a top-down strategic initiative. The programme encouraged the formation of local teams at the clinics, which created interest, engagement and motivation, giving rise to a culture positive towards guideline implementation. We acted on the knowledge that without an appropriate organizational culture, only small, temporary improvements will be possible [23]. Measurements were based on indicators derived from the guidelines' evidence-based requirements of preferred practice, giving the implementation a clear structure. The use of the indicators enabled regular feedback on gaps in performance relative to the guidelines, which was strengthened by outreach visits by the researcher, an expert on the guidelines. Conditions for a learning process were created. These elements of the programme are strategies previously reported to be useful in implementation [10]. A summary of the implementation programme is provided in Table 4.

Table 4. The four dimensions of the implementation process at intervention and control clinics.

The process indicators used in this study could also be used to assess the quality of the care documented in patient records. Analyses of the documentation in this study showed that the indicators had high inter-rater reliability and were easy to use. The study also showed that the indicators were feasible for audit and feedback as part of the implementation strategy. The indicators used are objective measures that can be used by clinicians, managers and the public when assessing the quality of the process and the outcome of patient care [24,25]. Hermann and Palmer [26] have argued that process measures may well be used more effectively in mental health care.

In a previous study, antidepressant dosage and duration adequacy were used as guideline-based process indicators to identify whether depression care was based on clinical guidelines [27]. Indicators can be useful in determining whether a good standard of psychiatric service is maintained, and in detecting and stimulating solutions to problems found in service delivery.

This is a small study and the results are based on patient records and process quality scores only. Therefore, the study needs to be replicated to confirm that random change and selection bias have not combined to produce a spurious result. We plan to do this using other clinics in the near future. The intervention clinics had all volunteered to participate and thus probably consisted of the most motivated and quality-focused clinics in the region, potentially introducing a bias. The fact that indicator documentation for depression patients at baseline was more frequent among intervention clinics supports that assumption. For the management of suicide attempters, the picture at baseline was mixed, with some indicators more frequently documented in the control clinics.

Another limitation was that the study relied on self-reporting. Local teams at the clinics assessed the patient records for indicators. However, a supervisor (the first author) paid regular visits to the clinic, discussing the registration. A random sample of records was reassessed by the first author revealing an acceptable inter-rater reliability (Kappa 0.92–1.0).

The strengths of the study are its longitudinal nature and the fact that it was a quasi-experiment involving measures before and after the intervention, with a comparison group. However, the follow-up was only six months and the long-term impact is unknown.

This implementation model, although promising in the light of our study, needs to be further tested. In this study we showed that these indicators were easy to use and had high inter-rater reliability. As indicated in the literature, they can also be used for quality assessment. The challenge in all implementation work is to achieve improvement that is sustainable over time. We plan to follow up the studied clinics to learn more about their guideline compliance and the effects of using the indicators systematically. The experiences gained from the local implementation will contribute to the modification, revision and adaptation of the guidelines, further enhancing their perceived usefulness and local application.

Conclusion

The findings in this study suggest that compliance to clinical guidelines measured by indicators of required clinical practice was enhanced by an active implementation. The active implementation included four dimensions: strategic, cultural, technical and structural.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

TF, AÅW, MB and YF all participated in the design of the study. TF and YF analysed the data. All authors read and approved the final manuscript.

Acknowledgements

This research was supported by the Stockholm County Council, Sweden. The authors wish to express their thanks to all the participants in the implementations process and Susanne Wicks, statistician.

References

  1. Straus SE, Richardson WS, Glasziou P, Haynes RB: Evidence-based Medicine: how to practice and teach EBM. Third edition. London: Churchill Livingstone; 2005.

  2. Grol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients' care. Lancet 2003, 362(9391):1225-1230.

  3. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, Wensing M, Dijkstra R, Donaldson C: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004, 8(6):iii-iv, 1-72.

  4. Michie S, Pilling S, Garety P, Whitty P, Eccles MP, Johnston M, Simmons J: Difficulties implementing a mental health guideline: an exploratory investigation using psychological theory. Implement Sci 2007, 2:8.

  5. Roffman JL, Simon AB, Prasad KM, Truman CJ, Morrison J, Ernst CL: Neuroscience in psychiatry training: how much do residents need to know? Am J Psychiatry 2006, 163(5):919-926.

  6. Kramer TL, Daniels AS, Zieman GL, Williams C, Dewan NA: Psychiatric practice variations in the diagnosis and treatment of major depression. Psychiatr Serv 2000, 51(3):336-340.

  7. Hermann RC: Improving Mental Healthcare: A Guide to Measurement-Based Quality Improvement. Washington: American Psychiatric Publishing; 2005.

  8. Gartlehner G, West SL, Lohr KN, Kahwati L, Johnson JG, Harris RP, Whitener L, Voisin CE, Sutton S: Assessing the need to update prevention guidelines: a comparison of two methods. Int J Qual Health Care 2004, 16(5):399-406.

  9. Barosi G: Strategies for dissemination and implementation of guidelines. Neurol Sci 2006, 27(Suppl 3):S231-234.

  10. Grol R: Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care 2001, 39(8 Suppl 2):II46-54.

  11. Grol R, Wensing M, Eccles M: Improving Patient Care: The Implementation of Change in Clinical Practice. Oxford: Elsevier; 2004.

  12. Charbonneau A, Rosen AK, Ash AS, Owen RR, Kader B, Spiro A 3rd, Hankin C, Herz LR, Jo VPM, Kazis L, Miller DR, Berlowitz DR: Measuring the quality of depression care in a large integrated health system. Med Care 2003, 41(5):669-680.

  13. Weinmann S, Koesters M, Becker T: Effects of implementation of psychiatric guidelines on provider performance and patient outcome: systematic review. Acta Psychiatr Scand 2007, 115(6):420-433.

  14. Medicinskt programarbete: Regionalt vårdprogram för depressionssjukdomar inkl. mano-depressiv sjukdom [Regional clinical guidelines for depressive disorders, including manic-depressive illness]. First edition. Stockholm County Council; 2005.

  15. Medicinskt programarbete: Regionalt vårdprogram – vård av suicidnära patienter [Regional clinical guidelines – care of suicidal patients]. Stockholm County Council; 2002.

  16. World Health Organization: International statistical classification of diseases and related health problems, Tenth Revision. Second edition. Geneva, Switzerland: World Health Organization; 2004.

  17. American Psychiatric Association: Diagnostic and statistical manual of mental disorders. Fourth edition. Washington, DC: American Psychiatric Press; 1994.

  18. Nordstrom G, Gardulf A: Nursing documentation in patient records. Scand J Caring Sci 1996, 10(1):27-33.

  19. First MB, Spitzer RL, Gibbon M, Williams JBW: Structured Clinical Interview for DSM-IV Axis I Disorders, Clinician Version (SCID-CV). Washington, D.C.: American Psychiatric Press; 1996.

  20. Reinert DF, Allen JP: The Alcohol Use Disorders Identification Test (AUDIT): a review of recent research. Alcohol Clin Exp Res 2002, 26(2):272-279.

  21. Shojania KG, Grimshaw JM: Evidence-based quality improvement: the state of the science. Health Aff 2005, 24(1):138-150.

  22. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD: Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2006, 2:CD000259.

  23. Shortell SM, Bennett CL, Byck GR: Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q 1998, 76(4):593-624.

  24. Campbell SM, Braspenning J, Hutchinson A, Marshall M: Research methods used in developing and applying quality indicators in primary care. Qual Saf Health Care 2002, 11(4):358-364.

  25. Shield T, Campbell S, Rogers A, Worrall A, Chew-Graham C, Gask L: Quality indicators for primary care mental health services. Qual Saf Health Care 2003, 12(2):100-106.

  26. Hermann RC, Palmer RH: Common ground: a framework for selecting core quality measures for mental health and substance abuse care. Psychiatr Serv 2002, 53(3):281-287.

  27. Charbonneau A, Rosen AK, Owen RR, Spiro A, Ash AS, Miller DR, Kazis L, Kader B, Cunningham F, Berlowitz DR: Monitoring depression care: in search of an accurate quality indicator. Med Care 2004, 42(6):522-531.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-244X/8/64/prepub