Open Access Research article

Measuring decision quality: psychometric evaluation of a new instrument for breast cancer surgery

Karen R Sepucha12*, Jeffrey K Belkora3, Yuchiao Chang12, Carol Cosenza4, Carrie A Levin5, Beverly Moy26, Ann Partridge27 and Clara N Lee8

Author affiliations

1 General Medicine Division, Massachusetts General Hospital, 50 Staniford Street, 9th floor, Boston, MA, 02114, USA

2 Harvard Medical School, Boston, MA, USA

3 Institute for Health Policy Studies, University of California, San Francisco, CA, USA

4 Center for Survey Research, University of Massachusetts, 100 Morrissey Boulevard, Boston, MA, USA

5 Informed Medical Decision Foundation, 40 Court Street, Boston, MA, USA

6 Massachusetts General Hospital Cancer Center, 55 Fruit Street, Boston, MA, USA

7 Dana-Farber Cancer Institute, Brigham and Women’s Hospital, Boston, MA, USA

8 Division of Plastic and Reconstructive Surgery, Lineberger Comprehensive Cancer Center, Sheps Center for Health Services Research, University of North Carolina, CB Box 7195, Chapel Hill, NC, 27599-7195, USA

Citation and License

BMC Medical Informatics and Decision Making 2012, 12:51  doi:10.1186/1472-6947-12-51

Published: 8 June 2012



This paper examines the acceptability, feasibility, reliability, and validity of a new decision quality instrument that assesses the extent to which patients are informed and receive treatments that match their goals.


A cross-sectional mail survey of recent breast cancer survivors, providers, and healthy controls was conducted, along with a retest survey of survivors. The decision quality instrument includes knowledge questions and a set of goals, and yields two scores: a breast cancer surgery knowledge score and a concordance score, which reflects the percentage of patients who received treatments that match their goals. Hypotheses related to the acceptability, feasibility, discriminant validity, content validity, predictive validity, and retest reliability of the instrument were examined.
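To make the concordance score concrete, the following is a minimal, hypothetical sketch in Python. The goal names, ratings, prediction rule, and data are invented for illustration only; the study's actual model for predicting treatment from goals is not described in this abstract.

```python
# Hypothetical sketch: a concordance score is the percentage of patients
# whose actual treatment matches the treatment predicted from their goals.

def predict_treatment(goals):
    # goals: dict of importance ratings (hypothetical items, 0-10 scale).
    # Toy rule: if keeping the breast matters more than avoiding worry
    # about recurrence, predict lumpectomy; otherwise predict mastectomy.
    if goals["keep_breast"] > goals["avoid_worry"]:
        return "lumpectomy"
    return "mastectomy"

def concordance_score(patients):
    # patients: list of (goals, actual_treatment) pairs.
    matches = sum(
        1 for goals, actual in patients
        if predict_treatment(goals) == actual
    )
    return 100.0 * matches / len(patients)

# Invented example data: two concordant patients, one discordant.
patients = [
    ({"keep_breast": 9, "avoid_worry": 3}, "lumpectomy"),
    ({"keep_breast": 2, "avoid_worry": 8}, "mastectomy"),
    ({"keep_breast": 7, "avoid_worry": 6}, "mastectomy"),  # discordant
]
print(round(concordance_score(patients), 1))  # 66.7 for this toy sample
```

In the study itself, a concordance score of 89% would mean that 89% of patients received the treatment predicted by their stated goals.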


Responses were received from 440 eligible patients, 88 providers, and 35 healthy controls. The decision quality instrument was feasible to implement, with little missing data. The knowledge score had good retest reliability (intraclass correlation coefficient = 0.70) and discriminated between providers and patients (mean difference 35%, p < 0.001). The majority of providers felt that the knowledge items covered content essential to the decision. Five of the six treatment goals met targets for content validity, and these five goals had moderate to strong retest reliability (0.64 to 0.87). The concordance score was 89%, indicating that most patients received treatments concordant with those predicted by their goals. Patients who received concordant treatment reported levels of confidence and regret similar to those who did not.


The decision quality instrument met criteria for feasibility, reliability, and discriminant and content validity in this sample. Additional research is needed to examine its performance in prospective studies and in more diverse populations.