
OPTIGOV - A new methodology for evaluating Clinical Governance implementation by health providers

Abstract

Background

The aim of Clinical Governance (CG) is the pursuit of quality in health care through the integration of all the activities impacting on the patient into a single strategy.

OPTIGOV (Optimizing Health Care Governance) is a methodology for the assessment of the level of implementation of CG within healthcare organizations. The aim of this paper is to explain the process underlying the development of OPTIGOV, and describe its characteristics and steps.

Methods

OPTIGOV was developed in 2006 by the Institute of Hygiene of the Catholic University of the Sacred Heart and Eurogroup Consulting Alliance. The main steps of the process were: choice of areas for analysis and questionnaire development, based on a review of scientific literature; assignment of scores and weights to individual questions and areas; implementation of a software interfaceable with Microsoft Office.

Results

OPTIGOV consists of: a) a hospital audit with a structured approach; b) development of an improvement operational plan. A questionnaire divided into 13 areas of analysis is used. For each area there is a form with a variable number of questions and "closed" answers. A score is assigned to each answer, area of analysis, healthcare department and unit. The single scores can be gathered for the organization as a whole.

The software application allows for collation of data, calculation of scores and development of benchmarks to allow comparisons between healthcare organizations. Implementation consists of three stages: the preparation phase includes a kick off meeting, selection of interviewees and development of a survey plan. The registration phase includes hospital audits, reviewing of hospital documentation, data collection and score processing. Lastly, results are processed, inserted into a final report, and discussed in a meeting with the Hospital Board and in a final workshop.

Conclusions

The OPTIGOV methodology for the evaluation of CG implementation was developed with an evidence-based approach. The ongoing adoption of OPTIGOV in several projects will put to the test its potential to realistically represent the organization status, pinpoint criticalities and transferable best practices, provide a plan for improvement, and contribute to triggering changes and pursuit of quality in health care.


Background

The ever-increasing public concern over patient safety and quality in health care was the key driver of the development of Clinical Governance [1–3].

In 1997, the UK Department of Health published the White Paper "The New NHS: modern, dependable" [4], which introduced the concept of CG as a method of accounting for clinical quality in health care. However, CG really came to prominence in 1998, when Scally and Donaldson, in the British Medical Journal, described CG as the key driver of quality improvement in the National Health Service (NHS). In that paper, CG was defined as "a system through which healthcare organizations are accountable for continuously improving the quality of their services and safeguarding high standards of care by creating an environment in which excellence in clinical care will flourish" [5].

The paper highlighted four components of quality, initially identified by the World Health Organization:

  • Professional performance (technical quality)

  • Resource use (efficiency)

  • Risk management (risk of injury or illness associated with the service provided)

  • Patient satisfaction with the service provided.

These four components formed the basis of CG in the Scottish and English Health Service [6].

Since the publication of the White Paper, the profile of CG and the emphasis on its application have grown steadily, and a number of international healthcare systems have embraced the principles of CG, so that these are now valued worldwide in terms of quality, effectiveness and accountability [7–10]. In Italy, the principles of CG were first mentioned in the Republic President Decree (RPD) of January 14th, 1997, with particular reference to:

  • the need for quality to be guaranteed by health care providers and to be monitored through specific indicators;

  • the opportunity for the Patient/Citizen to freely choose among different health care organizations all meeting the same quality standards.

To this purpose, the 1997 RPD established the "minimum structural, technological and organizational accreditation criteria" (i.e. minimum number of rooms, minimum size of rooms, emergency lighting and medical gas plant among structural criteria; minimum availability of equipment for diagnostic and therapeutic services among technological criteria; number of doctors, nurses and technicians among organizational criteria). These are specific requirements that health care organizations must meet in order to be permitted to provide health care in Italy, and compliance is verified by Regional authorities through a formal process. CG was subsequently defined in the 1999-2001 Regional Healthcare Plan of Emilia Romagna, the first Italian Region to implement CG elements, as "the heart of health organizations". The plan asserts the need to use suitable tools to avoid risks, quickly pinpoint adverse events, learn from mistakes, encourage good clinical practice and continuously improve it [11].

In a field characterized by plenty of tools for measuring and improving quality, CG aims to reveal that quality can be improved only by a general vision of organization, whose keys are the functional relationships among the different parts of the system [12, 13]. CG is directed towards the integration of all the activities impacting on the patient into a single strategy, with different key components: research and development; education; continuous training and professional development; evidence based practice; clinical audits; clinical practice variability reduction; clinical leadership; team working and partnership promotion; performance measurement and appraisal; clinical risk management; patients and health care professionals involvement [14, 15].

With the aim of assessing the implementation level of CG prerequisites and areas within a healthcare organization, triggering changes in clinical practice and promoting a quality-oriented culture, a methodology called "OPTIGOV" (Optimizing Health Care Governance) was developed between March and December 2006 in Rome by the Department of Public Health of the Catholic University of the Sacred Heart, with the technical support of Eurogroup Consulting Alliance. OPTIGOV is addressed to health care professionals, both those with managerial roles (managing director, administration director and managers) and those assisting the patient within the clinical process.

The aim of this paper is to explain the process underlying the development of OPTIGOV, and describe in detail its constitutive elements, characteristics and steps.

Methods

Choice of analysis areas and questionnaire development

In January 2006, a multidisciplinary team was established, consisting of 6 CG experts with a medical background (physicians expert in public health), 1 expert in governance and business management with a non-medical background and 1 software developer. The team performed a systematic review of the scientific literature consistent with the QUOROM statement (the PRISMA statement is the currently available updated version) [16, 17]. The PubMed and Embase databases were searched with the key words "Clinical Governance", "Quality of Health Care" and "Health Services Evaluation". The time limits of the search ranged from January 1st 1996 to October 15th 2006. 705 citations were found in PubMed and 571 in Embase. A total of 321 articles were selected based on title and abstract reading; 116 were full-text articles, 69 of which were used by the team to develop the OPTIGOV methodology, based on the following internal inclusion criteria:

  • CG definition;

  • CG prerequisites and areas definition and description;

  • description of best practices of implementation of CG areas;

  • attempts to measure the level of application of CG practices in single health care structures;

  • health care and services quality indicators.

The review was performed by 4 of the 6 CG experts with a medical background (junior researchers) and any disputes were resolved through consultation of the other 2 experts (senior researchers) [16, 17].

The most explanatory definitions of CG were looked up, beginning with the basic reference sources of the discipline that put into motion discussion on CG, the subsequent observations by the same Authors, as well as works on Evidence-Based Medicine [5, 18–23]. Areas to which the perspective of CG has been applied within health care organizations since the introduction of the notion were taken into account. In each area, best practices were identified based on evidence on their efficacy, effectiveness and impact on quality improvement. Each classical area of CG was broken down into a series of constituent aspects (subareas) and each subarea was associated with one or more best practices. Previous attempts to measure the level of application of CG practices in single health care structures and indicators suitable for the measurement of the degree of application of the best practices so far identified were taken into account [24–26].

The team devised a series of questions capable of assessing the selected practices. Each area was associated with a form consisting of primary questions ("mother questions"), further developed by a series of secondary questions ("child questions"). The secondary questions have the purpose of specifically evaluating the single aspects of each global area. In choosing and phrasing the questions, a series of practical needs were taken into account. To safeguard objectivity and reproducibility, multiple choice questions were chosen. To ensure ease of application, questions were kept as short as possible, and their number as limited as possible, without compromising a complete coverage of all areas of CG.

Through a pilot study carried out within a Scientific Research and Care Institute in Rome, the team validated the questionnaire by checking that the questions were easy to understand and not liable to misinterpretation by either the interviewer or the interviewee, and by identifying, for each area, the professional position best suited to answer the questions.

Scoring system

Each question was assigned a score, so that all the answers totalled up to a maximum global score of 100 for each area of analysis. The global CG score was obtained by assigning each area a weight of 1/13.

Level of interest of the questions

The level of interest of the questions was chosen by some authors (WR and GLT) according to the level of recommendation and criteria suggested by the scientific literature.

  • Every question considered to have the lowest level of interest was designated with C.

  • Every question considered to have an intermediate level of interest was designated with B, equal to 2Cs.

  • Every question considered to have the highest level of interest was designated with A, equal to 2Bs and to 4Cs.

Weighting system

  • All A, B and C questions were counted.

  • The number of C questions represented the base.

  • Since B = 2Cs, the number of B questions was multiplied by 2.

  • Since A = 4Cs, the number of A questions was multiplied by 4.

  • The maximum weight of all the answers of the form (100) was divided by this total (A + B + C, expressed in C units) to obtain the weight of the answer to C questions. (If a form presented only A and B questions, 100 was divided by the A + B total to obtain the weight of the answer to B questions.)

  • The weight of the answer to C questions was multiplied by 2 (B = 2Cs) to obtain the weight of the answer to B questions.

  • The weight of answer to C questions was multiplied by 4 (A = 4Cs) to obtain the weight of the answer to A questions.
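The weighting steps above amount to expressing every question in C-equivalents and dividing 100 by their total. A minimal sketch, using hypothetical question counts for illustration:

```python
def answer_weights(n_a: int, n_b: int, n_c: int) -> dict:
    """Per-answer weights for one OPTIGOV form, following the rules above:
    B = 2 Cs and A = 4 Cs, with all answers on the form totalling 100."""
    c_equivalents = 4 * n_a + 2 * n_b + n_c   # express every question in C units
    w_c = 100 / c_equivalents                 # weight of a C answer
    return {"A": 4 * w_c, "B": 2 * w_c, "C": w_c}

# Hypothetical form with 2 A, 3 B and 6 C questions:
weights = answer_weights(2, 3, 6)
# the 2 A, 3 B and 6 C answers together total exactly 100
```

A form with no C questions is handled by the same arithmetic, since the divisor is simply the A + B total in C units.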

Assignment of weights to the forms (areas)

Different types of health care institutions have different callings and features, and cannot be expected to devote the same degree of effort to the same areas. Accordingly, a different weighting system for questionnaire forms (areas) was used for different institutions, so that the relative contribution of each form (area) to the global score depended on the type of institution being examined: Teaching Hospital, Scientific Research and Care Institute, General Hospital, Classified Hospital, Local Health Unit (LHU) Hospital.

Weights were assigned using a Delphi approach. A panel of clinicians (medical doctors and surgeons) was asked to indicate the weights they considered most appropriate for the different healthcare organizations: the relative significance of the form areas was obtained for each type of organization, based on their case-mix index.
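The aggregation of area scores into a global score can be sketched as follows, with the equal 1/13 weight as the default and institution-specific weights supplied where available (function and area names are illustrative, not the actual OPTIGOV code):

```python
def global_score(area_scores, area_weights=None):
    """Combine per-area scores (each 0-100) into a global CG score.

    Without explicit weights, every area counts equally (1/13 for the
    13-area questionnaire); institution-specific weights, such as those
    obtained from the Delphi panel, can be passed instead.
    """
    if area_weights is None:
        n = len(area_scores)
        area_weights = {area: 1 / n for area in area_scores}
    return sum(score * area_weights[area] for area, score in area_scores.items())

# Equal weighting of two illustrative areas:
print(global_score({"Clinical audit": 80, "Risk management": 60}))  # 70.0
```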

In order to facilitate the application of the methodology, a series of practical indications was also introduced and collected into an "interviewer handbook".

The internal consistency of the tool was measured by Cronbach's alpha.
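For reference, Cronbach's alpha for one questionnaire area can be computed from respondents' item scores as below (a plain-Python sketch; in practice a statistical package would be used):

```python
def cronbach_alpha(rows):
    """Cronbach's alpha: rows[i][j] is respondent i's score on item j.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items in the area.
    """
    k = len(rows[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(variance([r[j] for r in rows]) for j in range(k))
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)
```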

Software development

A software application has been developed for OPTIGOV based on a template from Microsoft Office products, to allow an easy interface with them. It has been structured into two main modules:

  • the first one, based on a series of MS Excel sheets, for data interface and score processing;

  • the second one, based on MS Access, used for data storage and processing.

Accordingly, the application allows three levels of interaction: interface, score processing and data processing (see below).

Results

The OPTIGOV methodology

The OPTIGOV methodology, aimed at the review of the degree of implementation of CG within a healthcare organization, is based on:

  a) interviews, supported by a questionnaire, the Clinical Governance Scorecard (CGS);

  b) review of hospital documentation;

  c) data collection and analysis;

  d) elaboration and development of operational plans for improvement, according to the priorities identified for the particular institution.

The OPTIGOV methodology consists of the following steps:

  • analysis of CG structural and functional pre-requisites and areas;

  • evaluation of these elements by a global score (with a maximum value = 100) derived from weighting of partial scores (subareas);

  • identification of strengths and weaknesses of the organization;

  • provision of suggestions and indications to the Hospital management, in order to trigger tangible improvement actions;

  • creation of an up-to-date database with the results of the analysis;

  • monitoring of health care services quality improvement.

Questionnaire

The questionnaire consists of 13 analysis forms: the first 4 (indicated as A, B, C and D) refer to the essential structural and functional prerequisites for efficient adoption of the tools of CG; the other 9 (numbered 1 to 9) investigate the single CG areas and are aimed at evaluating the effective application level of each tool. CG tools are all those instruments that ensure the pursuit of quality in health care (e.g. use of guidelines, clinical pathways and procedures, risk mapping, incident reporting systems, quality standards). OPTIGOV investigates the existence of CG tools within a health care organization through specific questions and evaluates their utilization and effectiveness level through a review of hospital documentation. Each area is explored by a form with a variable number of questions and "closed" single or multiple answers; the total number of questions is 179 (Table 1). The OPTIGOV methodology allows a score to be assigned to each area of analysis, healthcare department and unit - the department being an organizational element defined by the Italian Ministry of Health as "an integrated organization of homogeneous, similar or complementary units [...] which all pursue common health outcomes" [27]. The single scores can be gathered for the organization as a whole, thereby allowing for a global evaluation (see above).

Table 1 OPTIGOV analysis areas; first 3 questions of each form, including an example of weighted scores

Internal consistency, measured by Cronbach's alpha, showed good results for the vast majority of the areas (Table 2). The different weights given to the forms (areas) depending on the type of institution being examined are shown in Table 3.

Table 2 Results of the internal consistency test of the questionnaire areas
Table 3 Relative contribution of each form (area) to the global score in different types of institutions

The examiners are members of a Project Team, composed of public health professionals and management analysts belonging to:

  • the Clinical Governance Unit of the Institute of Hygiene of the Catholic University of the Sacred Heart, Rome, Italy;

  • the Public Health and Microbiology Department, University of Turin, Italy.

They are therefore drawn from outside the organization under review, to ensure the objectivity of the diagnostic review. Project Team members are carefully trained in the knowledge and application of the methodology by the multidisciplinary team who developed it. The questionnaire is completed face to face by the expert compilers of the Project Team, who record the answers provided by interviewees.

Software application

The software application allows the expert compilers who perform audits to collate all of the required data, calculate the scores, make comparisons intra- and inter-healthcare organizations and develop benchmarks.

The application consists of three levels:

  1. the first level is the interface used by the expert to register the answers electronically;

  2. the second level is score processing, which provides scores referring to areas, departments, units and the hospital;

  3. the third level is data processing, used to carry out comparisons, formulate statistics and indicators, and create graphics.

The interfaces provided are identical to the paper questionnaire, so that the compiler can decide to fill in the questionnaire on a laptop or do it afterwards.

The questions are structured in a hierarchical order: there are basic questions ("mother questions") that trigger other questions ("child questions"). The hierarchy influences the scoring system: the "mother question" has a heavier weight than the "child question" (see above).

The database is structured in three hierarchical levels: level 1 includes all the separate questions, without grouping or aggregation; level 2 includes the questions recorded by area and by unit (each area can refer to many units); level 3 includes the questions recorded by department and by the whole health care organisation.
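The three-level hierarchy described above could be modelled, for illustration, along these lines (class and field names are hypothetical, not the actual OPTIGOV schema):

```python
from dataclasses import dataclass, field

@dataclass
class Question:                  # level 1: individual questions
    text: str
    score: float = 0.0
    children: list = field(default_factory=list)   # "child questions"

@dataclass
class AreaRecord:                # level 2: questions grouped by area and unit
    area: str
    unit: str
    questions: list = field(default_factory=list)

@dataclass
class DepartmentRecord:          # level 3: aggregation by department
    department: str
    areas: list = field(default_factory=list)

    def total_score(self) -> float:
        """Sum the question scores across all areas of the department."""
        return sum(q.score for a in self.areas for q in a.questions)
```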

Interviews, Interviewees and data sources

There are two different interview levels and models: Board and Unit. The Board is represented by the top management of the health care organization; the Unit is represented by each single ward. Board level interviews require 10 forms to be administered: A, B, C, D, 4, 5, 6, 7, 8, 9; Unit level interviews require 6 forms: 1, 2, 3, 5, 8, 9. Interviewees, who are health professionals operating at all levels within the organization, are selected by the Project Team in partnership with the healthcare organization's Board, which is aware of the specific roles and levels of responsibility within the organization and can therefore indicate the most appropriate interlocutors to report on specific areas. Figures 1 and 2 show the usual interviewees selected for the Board and Unit interviews respectively. In order to review hospital documentation and collect data about structural features (e.g. number of departments, units, beds), human resources (e.g. number of doctors, nurses, technicians) and activity indicators (e.g. value of production, number and average weight of hospitalizations, bed occupancy rate, percentage of surgical cases), a series of data sources must be consulted (Table 4).
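The Board/Unit form assignment just described can be captured in a simple lookup (a sketch; identifiers are illustrative):

```python
# Forms administered at each interview level, as listed above.
FORMS_BY_LEVEL = {
    "Board": ["A", "B", "C", "D", "4", "5", "6", "7", "8", "9"],
    "Unit":  ["1", "2", "3", "5", "8", "9"],
}

def forms_for(level: str) -> list:
    """Return the questionnaire forms to administer at a given interview level."""
    return FORMS_BY_LEVEL[level]
```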

Figure 1

Board level Interviews.

Figure 2

Unit level Interviews.

Table 4 Essential data sources to be reviewed

Actions

OPTIGOV requires a specific sequence of actions. The Project Team is created (see above); it identifies areas of specific interest (high level management and management staff, administration and health departments and units) and carries out the diagnostic review and evaluation activity via hospital audits, by acquiring a set of pre-established data, performing interviews and reviewing hospital documentation. Information gained through the interviews is then compared with the contents of the documentation and finally entered into the registration forms.

Data are then processed, and a set of evaluations and scores per area, department, unit and hospital is produced. The results of the review are summarized in a final report. The report should indicate strengths and weaknesses, criticalities and transferable best practices of the healthcare organization; provide suggestions and indications to the Management; and describe an operational plan for change specifying possible future priorities and interventions for improvement. The timetable may include short, middle and long term changes, depending on the criticalities detected. After a meeting with the Board for a first evaluation of the results of the analysis, a final workshop is set up to share the conclusions with all the stakeholders. The timeframe of application of OPTIGOV is about 5 weeks (Figure 3).

Figure 3

Timeframe of the OPTIGOV methodology application.

The next stage is represented by the implementation of the proposed changes in an operational plan: the analysed healthcare organizations undertake specific improvement actions, in order to increase the quality of services and processes. Both the Project Team and the Board of the healthcare organization are involved in the implementation process. The effective degree of improvement of the quality of services is monitored by further diagnostic reviews, the results of which are compared with the previous organization status.

Discussion

It is widely accepted that the term CG describes an organizational accountability framework useful to improve clinical care, safeguard standards and work towards excellence [28]. The introduction of CG into the National Health Service can be seen as a fundamental shift in the regulatory relationship between the state and medical professionals [29], while at the organizational level it can be considered a process that involves a move towards "encoded knowledge" through the use of "soft bureaucracy" [30]. In this view, according to Iedema et al (2005), one can distinguish between moralizing and disciplinary devices, the latter useful for inspecting data generation and analysis, performance monitoring and management, accreditation, guidelines and protocol production and implementation, and the close integration of clinical and financial data [31].

The OPTIGOV methodology, described in this article, can be seen in the perspective of the above mentioned disciplinary devices. In particular, it may appear to be similar in nature to hospital accreditation protocols [32–35] and a degree of overlap does in fact exist. Nevertheless, the distinctive perspective of OPTIGOV is focused on the existence and the level of implementation of CG tools. Unlike accreditation agencies, OPTIGOV does not issue any certifications, but is solely aimed at implementing CG and improving the quality of health care. Therefore traditional accreditation agencies and OPTIGOV reflect a different culture towards quality improvement in those who choose them and could have a different impact on the cultural change of health professionals operating at all levels within the organization.

Furthermore, although attempts to address the question of organic and flexible evaluation of the implementation of CG have already been made, the evaluation systems they have produced are either experimental ones, with a broadly qualitative approach [24] or else they suffer from a limited scope, as they were designed for very specific sectors or one-off studies [25, 26].

Surveys on the effectiveness of good practices in medicine have revealed that the single most important problems in their application are lack of dedicated time and resources, as well as low motivation on the management side [36–39]. These observations suggest that, for good practices to be effective in a single institution, they must not only be known to health professionals, but must represent a permanent part of work routines; and that it is reasonable to ascertain, as OPTIGOV does, that they are a well-defined component of normal behaviours and a priority shared by all members of the medical team - which may not be the case, even when physicians maintain they know the essentials of those practices.

An important aspect addressed since the early stages of the development of OPTIGOV was the choice of the subjects to be held responsible for each area and interviewed. Over the last decade, debate over the accountability in implementing CG has suggested that responsibility is shared by managers and physicians alike, though to different degrees [11, 41, 42]. This observation has been taken into account by the authors in establishing how to determine who should be interviewed about what.

The choice of the areas to be explored and the subareas to be considered as their components was based on the earlier definitions of CG, but subsequent discussion on the real contents of good practices as well as questions that have been raised by health care reforms were taken into account. In particular, consideration of the growing trend of transferring scientific research into more and more levels of health care and the subsequent need to evaluate research skills [43] supported the inclusion of the area "Research & Development" and the careful weighing of its contribution to the final score. The controversies over the perceived importance and the evolution of the practice of Clinical Audit [44, 45] were also considered when developing the correspondent set of questions and scores. Especially in the field of error and risk management, all of the aspects that were signalled as relevant in the literature were included [46, 47].

OPTIGOV is currently being put to the test in the setting of several projects, at the moment restricted to Italy, involving a number of health care institutions of different types [48]. The next step in its development will thus be the evaluation of the results of its application. The opinions of health administrators will be collected, so as to check for any weaknesses in its completeness, effectiveness, ease of applicability and flexibility. In particular, the ability of the results of an OPTIGOV diagnostic review to establish priorities, provide basis for tailored interventions and influence subsequent clinical and management decisions has already been partially assessed after the first testing of OPTIGOV [48]. The latter led in fact to the triggering and implementation of a series of improvement actions (e.g. activation of training programmes on evidence-based medicine and clinical audits, and definition and dissemination of risk management procedures), some of which are still in progress.

A long-term analysis of the effect of an OPTIGOV diagnostic review on the level of implementation of CG in the structures involved will also serve as an indirect evaluation of the improvement actions that have been introduced following the reviews themselves. The effectiveness of these actions will be tested through a comparison of data before and after the interventions prompted by OPTIGOV.

OPTIGOV also offers the opportunity to make intra- and inter-organizations comparisons, by showing differences in the level of adoption and spreading of adequate CG tools.

The development of the methodology has some potential limitations. First, the selection of the areas to be considered and the questions they include could be questioned; however, we selected the areas on the basis of the definition of CG and a review of the scientific literature on this issue. Moreover, the first results of OPTIGOV [48] suggest the reliability and reproducibility of the methodology.

Another critical point could be represented by the way data are collected in OPTIGOV and the risk of information bias. The main element which controls information bias is the double origin of data: face-to-face interviews of single professionals indicated by the organization's Board are the first source; then, the relevant official documentation of the organization is consulted, and it is compared to the information provided by the interviewees so as to correct it if contradictions are detected.

Conclusions

The OPTIGOV methodology aims to assess the CG implementation level within health care organizations. The results of the ongoing OPTIGOV implementation projects will make it possible to verify the following aspects of the methodology, based on the scientific literature and on the observations of health administrators (see above):

  • its completeness and its coverage of the main CG best practices so far identified;

  • its accuracy in the choice of the indicators used to demonstrate the level of application of each practice;

  • its applicability and capability of covering different types of health care institutions.

OPTIGOV has the potential to produce a realistic representation of the organization status and to pinpoint both criticalities and transferable best practices. Thus it provides concrete plans for organizational change that increase the likelihood of improvements in the quality and excellence of health care.

References

  1. Wright J, Smith ML, Jackson DR: Clinical governance: principles into practice. J Manag Med. 1999, 13 (6): 457-65. 10.1108/02689239910299786.


  2. Wilson J: Clinical governance. Br J Nurs. 1998, 7 (16): 987-8.


  3. Roland M, Campbell S, Wilkin D: Clinical governance: a convincing strategy for quality improvement?. J Manag Med. 2001, 15 (3): 188-201. 10.1108/02689230110403678.


  4. Department of Health: The New NHS: modern, dependable. (accessed March 2010), [http://www.archive.official-documents.co.uk/document/doh/newnhs/contents.htm]

  5. Scally G, Donaldson LJ: The NHS's 50th anniversary. Clinical governance and the drive for quality improvement in the new NHS in England. BMJ. 1998, 317 (7150): 61-5.


  6. NHS Scotland, Educational Resources: Clinical Quality Improvement. (accessed March 2010), [http://www.clinicalgovernance.scot.nhs.uk/documents/clinical_quality_improvement_to_1997.pdf]

  7. Deighan M, Bullivant J: NHS Clinical Governance Support Team. Re-energising Clinical Governance through Integrated Governance. Clin Chem Lab Med. 2006, 44 (6): 692-3. 10.1515/CCLM.2006.132.


  8. Perkins R, Pelkowitz A, Seddon M: EPIQ. Quality improvement in New Zealand healthcare. Part 7: clinical governance--an attempt to bring quality into reality. N Z Med J. 2006, 119 (1243): U2259.


  9. SteelFisher GK: International innovations in health care: quality improvements in the United Kingdom. Issue Brief (Commonw Fund). 2005, 833: 1-16.


  10. Trabacchi V, Pasquarella C, Signorelli C: Evolution and practical application of the concept of clinical governance in Italy. Ann Ig. 2008, 20 (5): 509-15.

    CAS  PubMed  Google Scholar 

  11. Assessorato Sanità Regione Emilia Romagna: Piano Sanitario Regionale 1999-2001. [Regional Health Plan 1999-2001]. (accessed March 2010), [http://www.regione.emilia-romagna.it/sanita/psr/index.htm]

  12. Marshall M, Sheaff R, Rogers A, Campbell S, Halliwell S, Pickard S, Sibbald B, Roland M: A qualitative study of the cultural changes in primary care organizations needed to implement clinical governance. Br J Gen Pract. 2002, 52 (481): 641-5.

    PubMed  PubMed Central  Google Scholar 

  13. Freeman T, Walshe K: Achieving progress through clinical governance? A national study of health care managers' perceptions in the NHS in England. Qual Saf Health Care. 2004, 13 (5): 335-43. 10.1136/qshc.2002.005108.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  14. Allen P: Accountability for clinical governance: developing collective responsibility for quality in primary care. BMJ. 2000, 321 (7261): 608-11. 10.1136/bmj.321.7261.608.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  15. NHS Clinical Governance Support Team: What is Clinical Governance?. (accessed March 2010), [http://webarchive.nationalarchives.gov.uk/20081112112652/http://cgsupport.nhs.uk/About_CG/default.asp]

  16. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF: Improving the quality of reports of meta-analyses of randomized controlled trials: the QUOROM statement. Lancet. 1999, 354 (9193): 1896-1900. 10.1016/S0140-6736(99)04149-5.

    Article  CAS  PubMed  Google Scholar 

  17. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D: The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ital J Public Health. 2009, 6 (4): 354-91.

    Google Scholar 

  18. Ayres PJ, Wright J, Donaldson LJ: Achieving clinical effectiveness: the new world of clinical governance. Clinician in Management. 1998, 7: 106-11.

    Google Scholar 

  19. Department of Health: A First Class Service: Quality in the New NHS. (accessed March 2010), [http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_4006902]

  20. Lugon M, Secker-Walker J: Clinical Governance: making it happen. 1999, London: The Royal Society of Medicine Press Ltd

    Google Scholar 

  21. Halligan A, Donaldson L: Implementing clinical governance: turning vision into reality. BMJ. 2001, 322 (7299): 1413-7. 10.1136/bmj.322.7299.1413.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  22. Halligan A, Donaldson L: The meaning and implemention of clinical governance. G Ital Nefrol. 2002, 19 (Spec No 21): S8-13.

    PubMed  Google Scholar 

  23. Sackett D, Straus SE, Richardson WS: Evidence-based medicine: how to practice and teach EBM. 2005, Churchill Livingston, 3

    Google Scholar 

  24. Freeman T: Measuring progress in clinical governance: assessing the reliability and validity of the Clinical Governance Climate Questionnaire. Health Serv Manage Res. 2003, 16 (4): 234-50. 10.1258/095148403322488937.

    Article  CAS  PubMed  Google Scholar 

  25. Godden S, Majeed A, Pollock A, Bindman AB: How are primary care groups approaching clinical governance? A review of clinical governance plans from primary care groups in London. J Public Health Med. 2002, 24 (3): 165-9. 10.1093/pubmed/24.3.165.

    Article  PubMed  Google Scholar 

  26. Hartley AM, Griffiths RK, Saunders KL: An evaluation of clinical governance in the public health departments of the West Midlands Region. J Epidemiol Community Health. 2002, 56 (8): 563-8. 10.1136/jech.56.8.563.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  27. Italian Ministry of Health: Il Dipartimento. [The Department]. (accessed May 2010), [http://www.salute.gov.it/imgs/C_17_pagineAree_233_listaFile_itemName_0_file.pdf]

  28. Donaldson L, Gray J: Clinical Governance: a quality duty for health organizations. Quality in Health Care. 1998, 7 (suppl): 37-44.

    Google Scholar 

  29. Flynn R: Clinical governance and governmentality. Health, Risk and Society. 2002, 4 (2): 155-73. 10.1080/13698570220137042.

    Article  Google Scholar 

  30. Flynn R: Soft bureaucracy, governmentality and clinical governance: theoretical approaches to emergent policy. Governing medicine: theory and practice. 2004, Open University Press. Maidenhead: McGraw-Hill International, 11-26. 1

    Google Scholar 

  31. Iedema R, Braithwaite J, Jorm C, Nugus P, Whelan A: Clinical governance: complexities and promises. Health Care Reform and Industrial Change in Australia: Lessons, Challenges and Implications. Edited by: Stanton P, Willis E, Young S. 2005, Basingstoke: Palgrave McMillan, 253-78.

    Google Scholar 

  32. Schyve PM: The evolution of external quality evaluation: observation s from the Joint Commission on Accreditation of Healthcare Organizations. Int J Qual Health Care. 2000, 12 (3): 255-8. 10.1093/intqhc/12.3.255.

    Article  CAS  PubMed  Google Scholar 

  33. Donahue KT, vanOstenberg P: Joint Commission International accreditation: relationship to four models of evaluation. Int J Qual Health Care. 2000, 12 (3): 243-6. 10.1093/intqhc/12.3.243.

    Article  CAS  PubMed  Google Scholar 

  34. Collopy BT, Williams J, Rodgers L, Campbell J, Jenner N, Andrews N: The ACHS Care Evaluation Program: a decade of achievement. Australian Council on Healthcare Standards. J Qual Clin Pract. 2000, 20 (1): 36-41. 10.1046/j.1440-1762.2000.00346.x.

    Article  CAS  PubMed  Google Scholar 

  35. Shaw CD: Accreditation and ISO: international convergence on health care standards ISQua position paper--October 1996. Int J Qual Health Care. 1997, 9 (1): 11-3. 10.1093/intqhc/9.1.11.

    Article  CAS  PubMed  Google Scholar 

  36. Braithwaite J: An empirical assessment of social structural and cultural change in clinical directorates. Health Care Anal. 2006, 14 (4): 185-93. 10.1007/s10728-006-0025-5.

    Article  PubMed  Google Scholar 

  37. Braithwaite J, Westbrook MT, Mallock NA, Travaglia JF, Iedema RA: Experiences of health professionals who conducted root cause analyses after undergoing a safety improvement programme. Qual Saf Health Care. 2006, 15 (6): 393-9. 10.1136/qshc.2005.017525.

    Article  PubMed  PubMed Central  Google Scholar 

  38. Korst LM, Signer JM, Aydin CE, Fink A: Identifying organizational capacities and incentives for clinical data-sharing: the case of a regional perinatal information system. J Am Med Inform Assoc. 2008, 15 (2): 195-7. 10.1197/jamia.M2475.

    Article  PubMed  PubMed Central  Google Scholar 

  39. Scott L, Caress AL: Shared governance and shared leadership: meeting the challenge of implementation. J Nurs Manag. 2005, 13 (1): 4-12. 10.1111/j.1365-2834.2004.00455.x.

    Article  PubMed  Google Scholar 

  40. Piper IH: Clinical governance, the profession, the 'P' words and responsibility. Ann R Coll Surg Engl. 2000, 82 (Suppl 10): 344-5.

    CAS  PubMed  Google Scholar 

  41. Scott I: Time for a collective approach from medical specialists to clinical governance. Intern Med J. 2002, 32 (11): 499-501. 10.1046/j.1445-5994.2002.00299.x.

    Article  CAS  PubMed  Google Scholar 

  42. Sausman C: New roles and responsibilities of NHS chief executives in relation to quality and clinical governance. Qual Health Care. 2001, 10 (Suppl 2): 13-20.

    Google Scholar 

  43. Carter YH, Shaw S, Macfarlane F: Primary Care Research Team Assessment (PCRTA): development and evaluation. Occas Pap R Coll Gen Pract. 2002, 81: 1-72.

    Google Scholar 

  44. McConachie H, Logan S, Measure of Process of Care UK Validation Working Group: Validation of the measure of processes of care for use when there is no Child Development Centre. Child Care Health Dev. 2003, 29 (1): 35-45. 10.1046/j.1365-2214.2003.00314.x.

    Article  CAS  PubMed  Google Scholar 

  45. Davies HT: Developing effective clinical audit. Hosp med. 1999, 60 (10): 748-50.

    Article  CAS  PubMed  Google Scholar 

  46. Hill AP, Baeza J: Dealing with things that go wrong. Lancet. 1999, 354 (9196): 2099-100. 10.1016/S0140-6736(99)00443-2.

    Article  CAS  PubMed  Google Scholar 

  47. Smith LF, Harris D: Clinical governance--a new label for old ingredients: quality or quantity?. Br J Gen Pract. 1999, 49 (442): 339-40.

    CAS  PubMed  PubMed Central  Google Scholar 

  48. Specchia ML, Nardella P, Capizzi S, Donno S, La Torre G, Siliquini R, Ibba N, Campana A, Rabacchi G, Bertetto G, Ricciardi W: OPTIGOV: a Clinical Governance assessment tool. Results of an experimental programme in Piedmont Region. Clinical Governance. 2008, 4: 19-25.

    Google Scholar 


Author information

Corresponding author

Correspondence to Maria Lucia Specchia.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

GLT, RS and WR reviewed the scientific literature, chose the areas of analysis and developed the questionnaire; MLS, SC, PN and LV drafted the manuscript; AC developed the scoring system and the software. All authors read and approved the manuscript.

Giuseppe La Torre, Roberta Siliquini, Silvio Capizzi, Luca Valerio, Pierangela Nardella, Alessandro Campana and Walter Ricciardi contributed equally to this work.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Specchia, M.L., La Torre, G., Siliquini, R. et al. OPTIGOV - A new methodology for evaluating Clinical Governance implementation by health providers. BMC Health Serv Res 10, 174 (2010). https://doi.org/10.1186/1472-6963-10-174


Keywords