
Development of the PRE-HIT instrument: patient readiness to engage in health information technology

Abstract

Background

Technology-based aids for lifestyle change are becoming more prevalent for chronic conditions. Important “digital divides” remain, as well as concerns about privacy, data security, and lack of motivation. Researchers need a way to characterize participants’ readiness to use health technologies. To address this need, we created an instrument to measure patient readiness to engage with health technologies among adult patients with chronic conditions.

Methods

We conducted initial focus groups to determine domains, followed by item development and refinement, and exploratory factor analysis to determine the final items and factor structure. The development sample included 200 patients with chronic conditions from 6 family medicine clinics. From 98 potential items, the 53 best candidate items were examined using exploratory factor analysis. Test-retest reliability at 3 months was assessed with Pearson’s correlation.

Results

The final instrument had 28 items that sorted into 8 factors with associated Cronbach’s alpha: 1) Health Information Need (0.84), 2) Computer/Internet Experience (0.87), 3) Computer Anxiety (0.82), 4) Preferred Mode of Interaction (0.73), 5) Relationship with Doctor (0.65), 6) Cell Phone Expertise (0.75), 7) Internet Privacy (0.71), and 8) No News is Good News (0.57). Test-retest reliability for the 8 subscales ranged from 0.60 to 0.85.

Conclusion

The Patient Readiness to Engage in Health Internet Technology (PRE-HIT) instrument has good psychometric properties and will be an aid to researchers investigating technology-based health interventions. Future work will examine predictive validity.

Background

Consumers are turning more to the internet for health information [1]. Online and mobile health interventions to aid lifestyle change and chronic condition self-management are proliferating [2–4]. However, important “digital divides” such as age, education, and rural residence still exist and may limit consumer use of these tools [5]. Concerns about data security, privacy, and lack of motivation may also limit use [6]. Researchers and developers of online and mobile tools may want to assess not only the skills of prospective target patient populations, but also their motivations and concerns, which can be encapsulated in the term “readiness”. Researchers would benefit from an instrument that characterized their research participants’ likelihood of using these technology applications.

There has been previous instrument development in this area. Norman and Skinner developed a 10-item scale, the eHealth Literacy Scale (eHEALS), to measure the eHealth literacy concept. The scale prompts participants to evaluate their own abilities to search for, use, and evaluate health resources on the internet [7, 8]. Although this was an important first attempt to measure the concept of eHealth Literacy, it has several limitations. First, the researchers developed the scale with a youthful sample ranging in age from 13 to 21 years. No data exist on its performance in older adults, which is an important limitation, considering that internet and computer skills will likely differ between these two populations. Additionally, a Dutch version of the eHEALS failed to predict internet health use [9]. Lastly, an instrument that goes beyond literacy to measure readiness may be more useful to researchers.

Self-management is an important component of chronic disease management, and it is thought that interactive online interventions might engage and support patients to better self-manage [10]. But these tools can only help if patients are ready to use them. Therefore, we developed an instrument designed to go beyond basic eHealth Literacy and computer skills to measure readiness to use internet resources to access health information. Unlike the eHEALS, we included concepts such as information needs, motivations, privacy concerns, and preferred source of information. We also focused particularly on patients with chronic conditions, who tend to be older, a factor associated with decreased internet use [5]. This focus is important because many health information technology interventions are being developed for people with chronic conditions.

Methods

Table 1 provides an overview of the multiple and iterative methods used in our instrument development process, and the sample size used for each step.

Table 1 Instrument development activities

Identifying domains

To create candidate items, we first sought to understand the relevant domains. To do this, we conducted four focus groups with 16 patients with the chronic conditions of diabetes, hypertension, heart failure, or coronary artery disease. Separate focus groups with 2–6 participants each were run for self-identified internet users and non-users, as we hypothesized that their issues with technology use would differ. Focus group participants were asked where they got information about their health, about their internet use, their concerns about using the internet for health, their information and communication preferences, and their past experiences. Focus groups were audio recorded and transcribed by an experienced qualitative transcriptionist. Transcripts were analyzed using grounded theory methodology with investigator consensus on codes and themes, assisted by NVivo 8.0 software. The investigators were RJK, a family physician clinical researcher; SMC, an MPH experienced in qualitative methods; and JAS, a medical student trained by the team in focus group and qualitative methods. All three participated in focus group facilitation and field note recording. RJK and SMC analyzed the focus group data; both independently coded the transcripts and then met to agree on codes. Major themes emerged, which then informed item-writing to capture each domain. Some items were near direct quotes from participants.

Creating candidate items

Once relevant domains were identified from the focus group themes, we searched the literature for instruments addressing these domains. We identified instruments that addressed internet use [11], internet use for health purposes [12, 13], computer and internet anxiety [14], computer and internet abilities [15–18], attitudes and beliefs about the internet [14, 19–21], risk perception [22–28], internet security and privacy [6], health literacy [7, 29–32], motivation [33, 34], and media literacy [7]. While items were not culled from these instruments, reviewing them allowed us to examine different approaches to relevant domains and aided in our overall task of item writing. Item writing was also informed by the focus group themes, including some items that were near direct quotes of participants. Items were written only in the English language.

Of 98 candidate items, we selected the 53 best items, eliminating those that were double-barreled, lengthy, or had large words or complex sentence structure, while maintaining coverage of our identified domains. We avoided jargon, value-laden words, negatively worded questions, and negative prefixes, all of which can decrease an item’s validity coefficient [35, 36]. We also ensured that every question could be meaningfully answered by both internet users and non-users. Questions are not specific to any disease and more generally reference “health”. These 53 items made up our candidate item questionnaire. We used a 4-point Likert scale for all items with anchors Strongly Disagree, Disagree, Agree, and Strongly Agree [37]. Items that conceptually might be negatively associated with health information technology use were mixed in with positively associated items to decrease rote responding; these were then scored in reverse order [38].
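To illustrate the scoring convention described above, the sketch below reverse-scores a negatively keyed item on the 4-point scale so that all items run in the same direction before subscale scores are formed. This is an illustrative Python sketch; the column names are hypothetical and the paper does not prescribe any particular software.

```python
import pandas as pd

# Hypothetical responses on the 4-point scale:
# 1 = Strongly Disagree, 2 = Disagree, 3 = Agree, 4 = Strongly Agree
responses = pd.DataFrame({
    "internet_easy": [4, 3, 2, 1],          # positively keyed item
    "internet_frustrating": [1, 2, 3, 4],   # negatively keyed item
})

# Reverse-score negatively keyed items so all items point the same way:
# on a 1-4 scale the reversed value is (1 + 4) - observed.
reverse_keyed = ["internet_frustrating"]
responses[reverse_keyed] = 5 - responses[reverse_keyed]

print(responses)
```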

To ensure that respondents interpreted questions as intended, we administered the 53-item questionnaire to 21 participants in person. We used cognitive interviewing during questionnaire completion, asking participants to verbalize their interpretation of the questions as well as the thoughts that led to their answers [35]. The questionnaire was iteratively refined until participants no longer expressed any ambiguity about the meaning of items. During this process, a 4-point Likert scale with questions framed as hypothetical scenarios (e.g., “If I went on the internet, I would find it frustrating.”) proved preferable to a 5-point Likert scale that included a neutral response option; cognitive interviewing revealed that participants used the neutral option as a catch-all. The 53-item questionnaire was then administered to 200 additional participants. Information about age, self-rated general health status, highest level of education, and race/ethnicity was also collected from each participant. Five questionnaires with more than five skipped questions were omitted, for a final development sample of 195.

Sample

All participants in each phase of instrument development were patients age 18 years and older with the chronic conditions of diabetes, hypertension, heart disease, or heart failure. All were ambulatory patients attending one of 6 family medicine clinics of the Department of Family and Community Medicine at the University of Missouri. Patients were recruited from the clinic waiting rooms and asked to answer a screening questionnaire that ascertained whether they had a chronic condition. The researcher made efforts to approach every person in the waiting room and made no assumptions about eligibility or experience using the internet or computers. We limited participants to those who primarily speak English. The same recruitment method was used for both the focus groups and questionnaire completion. The University of Missouri Health Sciences Institutional Review Board approved all phases of this study.

Given the multiplicity of analyses employed in the development of a new instrument, it is difficult to derive a priori sample size estimates. We approached the sample size issue by estimating the sample required for an exploratory factor analysis (EFA). As a starting point, we posited that the propensity to use health information technology is composed of five factors: Capability, Access, Motivation/Risk Perception, Information Needs, and Privacy/Trust. We further conjectured that an initial screening step would reduce the number of candidate items to between 40 and 50. We also made the worst-case assumption that items comprising these five factors would exhibit weak communalities (the percent of variance in the observed items explained by the factor model). Under these assumptions, guidelines indicate that a minimum sample size of 130 subjects would be sufficient for the factor analysis [39]. To allow for the possibility of retaining fewer items or deriving more factors, the sample size was inflated to 200 subjects.

Factor analysis

The first quantitative analysis was to examine item response frequencies for the 53 items in the development sample. Items with insufficient variation across response options were considered for deletion. Such items do not discriminate between different levels of the trait they are intended to measure. After item culling we conducted an EFA to determine the factor structure of the instrument. One can use factor analysis in either an exploratory or confirmatory mode, but since this is a new tool we began with exploratory techniques. With EFA we do not specify the number of factors or the items that load on those factors in advance but rather let the data guide us [40].
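As a concrete illustration of the culling and initial factoring steps, the sketch below drops items whose responses pile up on a single option and then lets the eigenvalues of the item correlation matrix suggest how many factors are worth retaining. Python with the factor_analyzer package is an assumption (the paper does not report its statistical software), and the input file name and the 90% dominance cut-off are hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# One column per candidate item (53 columns), one row per participant
# in the development sample (n = 195); hypothetical file.
items = pd.read_csv("candidate_items.csv")

# Flag items with insufficient variation across the four response
# options, e.g. where a single option captures nearly all answers.
# The 0.90 threshold is illustrative; the paper does not state one.
dominant_share = items.apply(lambda col: col.value_counts(normalize=True).max())
items = items.drop(columns=dominant_share[dominant_share > 0.90].index)

# Unrotated extraction: eigenvalues of the item correlation matrix
# indicate how many factors exceed the variance of a single item.
fa = FactorAnalyzer(rotation=None)
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
print("Factors with eigenvalue > 1.0:", int((eigenvalues > 1.0).sum()))
```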

An essential step in an EFA is to determine the number of factors to retain. Determining the number of factors is both a matter of judgment about the content and quality of the factors and a statistical issue. By convention, one retains factors with an eigenvalue greater than 1.0. This criterion is motivated by the fact that EFA operates on the item correlation matrix, in which item variances are fixed at 1.0, so factors with a variance (eigenvalue) less than that of a single item are essentially noise. The initial factoring extracted 13 factors; however, several of these factors consisted of few items, contained items that cross-loaded on other factors, or were difficult to interpret. Final solutions were derived after a Promax oblique rotation. Items with factor loadings of less than 0.30, or with substantial loadings on more than one factor, were excluded from the instrument. From the initial pattern matrix of 13 factors, we identified 8 strong factors. We selected groups of items in each factor that loaded most heavily on that factor, with minimal or no dual loading on other factors. The investigators examined the items in each factor and agreed on a name for the overall factor concept. Each item was kept if it had a high loading, was a good conceptual representation of the named dimension, and was well worded and clearly measured the variable of interest. The EFA was re-run with the candidate items to achieve the final instrument. We examined the internal consistency of each factor using Cronbach’s alpha [35, 41].
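The rotation, retention, and internal-consistency steps could look roughly like the following. Again, Python with factor_analyzer is only a stand-in for whatever software was actually used; the file and column names are hypothetical, and the 0.30 cross-loading cut-off is shown for illustration, whereas the paper describes that judgment qualitatively.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(item_scores: pd.DataFrame) -> float:
    """Cronbach's alpha for the set of items making up one subscale."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Culled candidate items from the previous step (hypothetical file).
items = pd.read_csv("candidate_items_culled.csv")

# Fit the retained number of factors with a Promax (oblique) rotation.
fa = FactorAnalyzer(n_factors=8, rotation="promax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

# Keep items whose largest loading is at least 0.30 and that do not
# load substantially (here, >= 0.30) on more than one factor.
primary = loadings.abs().max(axis=1)
n_substantial = (loadings.abs() >= 0.30).sum(axis=1)
retained = loadings[(primary >= 0.30) & (n_substantial == 1)]
print(retained.round(2))

# Internal consistency for, e.g., the Computer Anxiety subscale
# (hypothetical column names for its four items).
ca_items = ["computer_frustrating", "info_amount_frustrating",
            "searching_stressful", "sorting_time_consuming"]
print("Computer Anxiety alpha:", round(cronbach_alpha(items[ca_items]), 2))
```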

To investigate the possibility that our 8 identified factors clustered into one or more higher order factors, we examined a scree plot of eigenvalues of the augmented correlation matrix against the number of factors. Factors with eigenvalues > 1 are candidates for higher order factors. Potential higher order factor solutions were examined using exploratory methods to determine whether the eight factors represent different constructs or whether they reflect different facets of multiple higher order concepts. The preferred way to do this is with confirmatory factor analysis (CFA) techniques; however, this analysis had convergence issues, possibly reflecting that the sample size was not sufficient to support a CFA model with 28 observed variables in 8 factors. Therefore the less demanding EFA method was again used, this time with the summated factor scores as the raw data.
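A sketch of that higher-order check is given below: subscale scores are formed by summation, and an EFA is then run on the eight summated scores. This is illustrative only (the CFA the authors would have preferred is not shown), the item-to-subscale column names are placeholders, and Python with factor_analyzer is again an assumed substitute for the original software.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# The 28 retained items, one row per participant (hypothetical file and
# placeholder column names; item counts per subscale match the paper).
items = pd.read_csv("prehit_items.csv")
subscale_items = {
    "HIN":  ["hin_1", "hin_2", "hin_3", "hin_4", "hin_5"],
    "CIEE": ["ciee_1", "ciee_2", "ciee_3", "ciee_4"],
    "CA":   ["ca_1", "ca_2", "ca_3", "ca_4"],
    "PMI":  ["pmi_1", "pmi_2", "pmi_3", "pmi_4", "pmi_5"],
    "RWD":  ["rwd_1", "rwd_2", "rwd_3"],
    "CPE":  ["cpe_1", "cpe_2"],
    "IPC":  ["ipc_1", "ipc_2"],
    "NNGN": ["nngn_1", "nngn_2", "nngn_3"],
}

# Summated factor scores: each subscale is the sum of its items.
subscales = pd.DataFrame(
    {name: items[cols].sum(axis=1) for name, cols in subscale_items.items()}
)

# Eigenvalues of the subscale correlation matrix feed the scree plot;
# a two-factor oblique solution corresponds to the reported "meta-factors".
fa2 = FactorAnalyzer(n_factors=2, rotation="promax")
fa2.fit(subscales)
eigenvalues, _ = fa2.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))
print(pd.DataFrame(fa2.loadings_, index=subscales.columns).round(2))
```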

Test/retest reliability

Three months after the initial administration, we mailed retest questionnaires (the full 53-item questionnaire) to a randomly selected sample of 53 participants. Two declined; of the 51 remaining, 45 returned completed questionnaires. Scores on the eight factors at initial administration and retest were compared using Pearson’s correlations.
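A minimal sketch of that test-retest computation is shown below, assuming the eight subscale scores are available for each participant at both time points; the file names are hypothetical, and scipy’s pearsonr stands in for whatever statistical package was actually employed.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical files: one row per participant, one column per subscale,
# at baseline and at the 3-month retest, indexed by participant ID.
baseline = pd.read_csv("subscales_baseline.csv", index_col="participant_id")
retest = pd.read_csv("subscales_retest.csv", index_col="participant_id")

# Restrict to participants who completed both administrations.
common = baseline.index.intersection(retest.index)
baseline, retest = baseline.loc[common], retest.loc[common]

for subscale in baseline.columns:
    r, _ = pearsonr(baseline[subscale], retest[subscale])
    print(f"{subscale}: test-retest r = {r:.2f}")
```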

Results

Focus group participants were 12 women and 4 men. Twelve self-identified as computer and internet users and 4 were non-users. Their average age was 56 (range 27–75). The percentage of non-internet users participating in the focus groups was similar to the percentage among patients who were approached to participate (25% vs. 32%). The major themes identified in the focus groups with users and non-users revolved around barriers and facilitators to health information technology resource use. Barriers and facilitators are presented in Table 2, with definitions and supporting quotes. One theme, asynchrony, was viewed by some as a facilitator and by others as a barrier.

Table 2 Focus group themes

Both users and non-users had privacy and security concerns about sharing information online; however, users were more sophisticated in their assessment of these potential risks, while non-users had a more global, unfocused concern, almost paranoia, about these risks, likely reflecting their more limited understanding of the structure and function of the internet. Anxiety about health concerns seemed to work in two ways for the participants. Searching for information was a way to understand better and perhaps relieve information needs, and therefore anxiety. However, many participants expressed a “no news is good news” approach, reflecting that seeking information on the internet could lead them to more information than they needed, and that some of that information would be distressing. Both groups also expressed that it was easy to become overwhelmed by the sheer amount of information that could be found on the internet, and to go “round and round” in their search for information. Looking for information on behalf of others was a prominent activity, as has been found in other literature [42]. Perhaps reflecting one of the most desired aims, several users stated that the internet helped them better understand their condition so that they could ask better questions during their visit and become more active participants in their own care.

Two hundred participants with chronic conditions completed the 53-item questionnaire plus demographic questions. The mean age of the sample was 54 years (s.d. 14 years) with a range of 20–86 years. Other demographic characteristics of the sample are listed in Table 3. We had a 64% participation rate among eligible participants approached; some of the eligible participants felt too ill to participate while in the waiting room of the clinic.

Table 3 Demographic characteristics of patient sample for questionnaire (n = 200)

Of the original 53 items, 28 were retained, sorting into 8 factors in the EFA. The content experts on our team (DRM, RJK, and SMC) were easily able to name the factors based on underlying concepts, supporting construct validity. The items and Cronbach’s alpha for each of the 8 factors are listed below. Test-retest reliability for the 8 subscales ranged from 0.60 to 0.85.

List of items and Cronbach’s alpha for the 8 factors

Health Information Need - HIN (0.84)

If I went on the internet, I would use it to look up things so that I wouldn’t worry about them anymore.

If I went on the internet, I would use it to look up information about herbals and/or supplements.

If I went on the internet I would use it to look up symptoms.

If I went on the internet I would use it to search for information about my health.

If I went on the internet I would use the internet to find information about medications.

Computer/Internet Experience, Expertise – CIEE (0.87)

If I went on the computer, I would be able to figure out most computer problems that I might run into.

If I went on the computer, I would have access to the internet.

If I went on the internet, I would find using the internet to be easy.

If I went on the internet, I would find using email to be easy.

Computer Anxiety – CA (0.82)

If I went on the computer, I would find using it to be frustrating.

If I went on the internet, I would get frustrated with the amount of information I found about health on the internet.

If I went on the internet, I would find searching for information on the internet would be stressful.

If I went on the internet, I would find sorting through information on the internet to be too time consuming.

Preferred Mode of Interaction – PMI (0.73)

Looking up health concerns on the internet is more convenient for me than contacting a doctor’s office.

I prefer calling my doctor’s office to emailing them.

I email my doctor.

I trust the internet as a source for health information.

Looking up information online about medications is easier than asking my doctor.

Relationship with Doctor – RWD (0.65)

I let my doctor handle the details of my health.

Doctors are my most trusted source of health information.

When I have a health concern, my first step is to contact my doctor’s office.

Cell Phone Expertise – CPE (0.75)

I go online using my cell phone.

I use my cell phone to text people almost every day.

Internet Privacy Concerns – IPC (0.71)

If I went on the internet, I would be very concerned about giving any personal information.

If I went on the internet, I would be concerned it would lead to invasions of my privacy.

No News is Good News - NNGN (0.57)

People today want to know too much about their health.

Regarding my health, I agree with the statement “No news is good news.”

I am concerned about what I might find if I look up health issues on the internet.

Examination of a scree plot of eigenvalues of the augmented correlation matrix against the number of factors revealed the possibility of 2 or 3 higher order factors (Figure 1). Two- and three-factor higher order solutions were examined using exploratory methods to determine whether the eight factors represent different constructs or whether they reflect different facets of multiple higher order concepts. The three-factor solution was problematic, with multiple cross-loadings, while the two-factor solution (Table 4) was both conceptually and analytically unambiguous. Two “meta-factors” were extracted which conceptually represent “Facilitators” and “Barriers” to health information technology use.

Figure 1. Scree plot of higher order structure.

Table 4 Factor pattern for retaining 2 oblique factors

Discussion

The PRE-HIT instrument is a valid instrument for measuring the likelihood of using health information technology resources among patients with chronic conditions. It addresses using information technology both to search for information and to communicate with the health care team. The instrument demonstrated good test-retest reliability. Its 8 subscales have good construct validity and robust factor loadings. The 8 subscales clustered into 2 larger meta-factors, “Facilitators” and “Barriers”, again with good construct validity.

There is a good match of the items and factors to the themes identified in the focus groups, reflecting good coverage of the identified domains. Domains and factors include not just measures of computer and internet ability and media literacy that have been addressed by previous instruments, but also user preference for mode of interaction and motivation and desire to search for information.

Women are over-represented in the sample, which likely reflects our strategy of recruiting from our clinic waiting rooms; women make up substantially more than half of all ambulatory care visits, especially as age increases [43]. Women are also the most frequent users of the internet as a health information source, perhaps resulting from their frequent roles as caretakers for children and aging parents [42, 44]. The percentages of each race in our sample are similar to the percentages in the United States population, which should aid generalizability [45]. However, Latino ethnicity is under-represented in our sample, so this is also an area for future work.

We limited items to English language and enrolled only participants who spoke English as their primary language. Many of the measure’s domains (e.g. trust, privacy issues) likely have a cultural context far beyond a simple translation and back translation of items. Validating this work for use in other languages and cultures would likely need to examine this cultural context and cultural specificity, in addition to a linguistic translation. Translation and validation in languages other than English is a potential area for future work.

While some factors had a very robust Cronbach’s alpha, others were more toward the low end of acceptable alpha levels. Cronbach’s alpha is very sensitive to the number of items in a factor, and we made the decision to keep the item number small to minimize potential burden on future research participants, perhaps with implications for each factor’s alpha level [41].

The PRE-HIT instrument builds upon groundwork laid by the eHEALS [7]. While the eHEALS was developed in a young population, the PRE-HIT was developed with older adults with chronic conditions. This is a key demographic target of online and mobile health and lifestyle self-management tools. The PRE-HIT instrument also goes beyond the computer skills and media literacy components of the eHEALS to examine factors such as motivation, information needs, privacy concerns, and user preference for mode of interaction [7, 11–13, 15–18]. The PRE-HIT instrument will likely be better suited to assess readiness among older adults with chronic conditions. This may help to bridge the gap in predicting use that was found with a Dutch examination of the predictive validity of the eHEALS [9].

Conclusions

Frequently, those who are developing and testing new internet- and technology-based interventions need to enroll patients to test these tools. Recurring questions are whom to enroll, how to know whether a participant is capable of using the technology, and whether they are likely to use it. The PRE-HIT instrument can help researchers choose appropriate test participants. It can also be used to assess a user’s readiness to use the technology and can therefore assist researchers in their statistical analyses evaluating these tools, especially in analyses examining use.

Future work will examine the predictive validity of the instrument, i.e., its ability to predict use of these health technology resources by patients with chronic conditions. This next step will define PRE-HIT scores that are likely to predict use and non-use. Examination of the subscales may also show why a participant does or does not use the technology. Confirming the factor structure, and particularly the second order structure, is also a target of future research. Additionally, the PRE-HIT instrument is largely suited to addressing computer, internet, and mobile technology use. As technologies evolve, the instrument may need to be modified to address different ways of using technology to improve and inform personal health.

While it would be unrealistic to expect the 28-item PRE-HIT instrument to be used in clinical practice, it will be a great aid to researchers who are examining emerging technologies to assist patients with lifestyle change and chronic disease self-management. Currently, it is difficult to determine whom to enroll in studies of these technologies and also difficult to characterize the sample beyond simple demographic information. The PRE-HIT instrument will allow investigators to enroll based on specified criteria and to better describe their sample and analyze their results.

Factor names

HIN: Health information need; CIEE: Computer/internet experience, expertise; CA: Computer anxiety; PMI: Preferred mode of interaction; RWD: Relationship with doctor; CPE: Cell phone expertise; IPC: Internet privacy concerns; NNGN: No news is good news.

Abbreviations

CFA: Confirmatory factor analysis; EFA: Exploratory factor analysis; eHEALS: eHealth Literacy Scale; PRE-HIT: Patient readiness to engage in health information technology; HIN: Health information need; CIEE: Computer/internet experience, expertise; CA: Computer anxiety; PMI: Preferred mode of interaction; RWD: Relationship with doctor; CPE: Cell phone expertise; IPC: Internet privacy concerns; NNGN: No news is good news.

References

  1. Pew Internet: health. http://www.pewinternet.org/topics/Health

  2. McMahon GT, Gomes HE, Hickson HS, Hu TM, Levine BA, Conlin PR: Web-based care management in patients with poorly controlled diabetes. Diabetes Care. 2005, 28: 1624-1629. 10.2337/diacare.28.7.1624.

  3. Allen M, Iezzoni LI, Huang A, Huang L, Leveille SG: Improving patient-clinician communication about chronic conditions: description of an internet-based nurse E-coach intervention. Nurs Res. 2008, 57: 107-112. 10.1097/01.NNR.0000313478.47379.98.

  4. Lorig KR, Ritter PL, Laurent DD, Plant K: Internet-based chronic disease self-management: a randomized trial. Med Care. 2006, 44: 964-971. 10.1097/01.mlr.0000233678.80203.c1.

  5. Kruse RL, Koopman RJ, Wakefield BJ, Wakefield DS, Keplinger LE, Mehr DR: Internet use by primary care patients: where is the digital divide?. Fam Med. 2012, 44: 342-347.

  6. Malhotra NK, Kim SS, Agarwal J: Internet users’ information privacy concerns (IUIPC): the construct, the scale, and a causal model. Inf Syst Res. 2004, 15: 336-355. 10.1287/isre.1040.0032.

  7. Norman CD, Skinner HA: eHEALS: the eHealth Literacy Scale. J Med Internet Res. 2006, 8: e27-10.2196/jmir.8.4.e27.

  8. Norman CD, Skinner HA: eHealth literacy: essential skills for consumer health in a networked world. J Med Internet Res. 2006, 8: e9-10.2196/jmir.8.2.e9.

  9. van der Vaart R, Van Deursen AJ, Drossaert CH, Taal E, Van Dijk JA, van de Laar MA: Does the eHealth Literacy Scale (eHEALS) measure what it intends to measure? Validation of a Dutch version of the eHEALS in two adult populations. J Med Internet Res. 2011, 13: e86-10.2196/jmir.1840.

  10. Wagner EH, Austin BT, Davis C, Hindmarsh M, Schaefer J, Bonomi A: Improving chronic illness care: translating evidence into action. Health Aff (Millwood). 2001, 20: 64-78. 10.1377/hlthaff.20.6.64.

  11. Czaja SJ, Charness N, Fisk AD, Hertzog C, Nair SN, Rogers WA, Sharit J: Factors predicting the use of technology: findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychol Aging. 2006, 21: 333-352.

  12. Brodie M, Flournoy RE, Altman DE, Blendon RJ, Benson JM, Rosenbaum MD: Health information, the Internet, and the digital divide. Health Aff (Millwood). 2000, 19: 255-265. 10.1377/hlthaff.19.6.255.

  13. Pelletier AL, Sutton GR, Walker RR: Are your patients ready for electronic communication?. Fam Pract Manag. 2007, 14: 25-26.

  14. Harrison AW, Rainer RK: An examination of the factor structures and concurrent validities for the computer attitude scale, the computer anxiety rating scale, and the computer-self-efficacy scale. Educ Psychol Meas. 1992, 52: 735-745. 10.1177/0013164492052003024.

  15. Bunz U, Curry C, Voon W: Perceived versus actual computer-email-web fluency. Comput Human Behav. 2007, 23: 2321-2344. 10.1016/j.chb.2006.03.008.

  16. Smith B, Caputi P, Rawstorne P: The development of a measure of subjective computer experience. Comput Human Behav. 2007, 23: 127-145. 10.1016/j.chb.2004.04.001.

  17. Bunz U: The Computer-Email-Web (CEW) Fluency Scale–development and validation. Int J Hum Comput Interact. 2004, 17: 479-506. 10.1207/s15327590ijhc1704_3.

  18. Yaghmaie F: Development of a scale for measuring user computer experience. J Res Nurs. 2007, 12: 185-190. 10.1177/1744987106068353.

  19. Gardner DG, Discenza R, Dukes RL: The measurement of computer attitudes: an empirical comparison of available scales. J Educ Comput Res. 1993, 9: 487-507. 10.2190/DXLM-5J80-FNKH-PP2L.

  20. Woodrow JE: A comparison of four computer attitude scales. J Educ Comput Res. 1991, 7: 165-187. 10.2190/WLAM-P42V-12A3-4LLQ.

  21. Murphy CA, Coover D, Owen SV: Development and validation of the computer self-efficacy scale. Educ Psychol Meas. 1989, 49: 893-899. 10.1177/001316448904900412.

  22. Swift JA, Glazebrook C, Macdonald I: Validation of a brief, reliable scale to measure knowledge about the health risks associated with obesity. Int J Obes (Lond). 2006, 30: 661-668. 10.1038/sj.ijo.0803165.

  23. Hudon C, Fortin M, Vanasse A: Cumulative Illness Rating Scale was a reliable and valid index in a family practice context. J Clin Epidemiol. 2005, 58: 603-608. 10.1016/j.jclinepi.2004.10.017.

  24. Parmelee PA, Thuras PD, Katz IR, Lawton MP: Validation of the cumulative illness rating scale in a geriatric residential population. J Am Geriatr Soc. 1995, 43: 130-137.

  25. Parkerson GR, Broadhead WE, Tse CK: The Duke Severity of Illness Checklist (DUSOI) for measurement of severity and comorbidity. J Clin Epidemiol. 1993, 46: 379-393. 10.1016/0895-4356(93)90153-R.

  26. Moss-Morris R, Weinman J, Petrie KJ, Horne R, Cameron LD, Buick D: The revised illness perception questionnaire (IPQ-R). Psychol Health. 2002, 17: 1-16. 10.1080/08870440290001494.

  27. Weinman J, Petrie KJ, Moss-Morris R, Horne R: The illness perception questionnaire: a new method for assessing the cognitive representation of illness. Psychol Health. 1996, 11: 431-445. 10.1080/08870449608400270.

  28. Wrench JS: The influence of perceived risk knowledge on risk communication. Commun Res Rep. 2007, 24: 63-70. 10.1080/08824090601128182.

  29. Parker RM, Baker DW, Williams MV, Nurss JR: The test of functional health literacy in adults: a new instrument for measuring patients’ literacy skills. J Gen Intern Med. 1995, 10: 537-541. 10.1007/BF02640361.

  30. Wallace LS, Rogers ES, Roskos SE, Holiday DB, Weiss BD: Brief report: screening items to identify patients with limited health literacy skills. J Gen Intern Med. 2006, 21: 874-877. 10.1111/j.1525-1497.2006.00532.x.

  31. Davis TC, Long SW, Jackson RH, Mayeaux EJ, George RB, Murphy PW, Crouch MA: Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med. 1993, 25: 391-395.

  32. Davis TC, Crouch MA, Long SW, Jackson RH, Bates P, George RB, Bairnsfather LE: Rapid assessment of literacy levels of adult primary care patients. Fam Med. 1991, 23: 433-435.

  33. Prochaska JO, DiClemente CC: Stages and processes of self-change of smoking: toward an integrative model of change. J Consult Clin Psychol. 1983, 51: 390-395.

  34. Biener L, Abrams DB: The contemplation ladder: validation of a measure of readiness to consider smoking cessation. Health Psychol. 1991, 10: 360-365.

  35. Streiner DL, Norman GR: Selecting the items. Health measurement scales: a practical guide to their development and use. 1995, New York: Oxford University Press, 54-68. 2

  36. Holden RR, Fekken GC, Jackson DN: Structured personality test item characteristics and validity. J Res Pers. 1985, 19: 386-394. 10.1016/0092-6566(85)90007-8.

  37. Likert R: A technique for the measurement of attitudes. Arch Psychol. 1932, 22: 1-55.

  38. Streiner DL, Norman GR: Biases in responding. Health measurement scales: a practical guide to their development and use. 1995, New York: Oxford University Press, 69-84. 2

  39. Mundfrom DJ, Shaw DG, Tian LK: Minimum sample size recommendations for conducting factor analyses. Int J Test. 2005, 5: 159-168. 10.1207/s15327574ijt0502_4.

  40. Fabrigar LR, Wegener DT, MacCallum RC, Strahan EJ: Evaluating the use of exploratory factor analysis in psychological research. Psychol Methods. 1999, 4: 272-299.

  41. Cronbach LJ, Warrington WG: Time-limit tests: estimating their reliability and degree of speeding. Psychometrika. 1951, 16: 167-188. 10.1007/BF02289113.

  42. Rice RE: Influences, usage, and outcomes of Internet health information searching: multivariate results from the Pew surveys. Int J Med Inform. 2006, 75: 8-28. 10.1016/j.ijmedinf.2005.07.032.

  43. National Ambulatory Medical Care Survey: 2010 summary tables. 2010, http://www.cdc.gov/nchs/data/ahcd/namcs_summary/2010_namcs_web_tables.pdf

  44. Houston TK, Allison JJ: Users of Internet health information: differences by health status. J Med Internet Res. 2002, 4: E7-10.2196/jmir.4.2.e7.

  45. USA QuickFacts from the US Census. http://quickfacts.census.gov/qfd/states/00000.html

Acknowledgements

This project was supported by grant number K08HS017948 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.

Corresponding author

Correspondence to Richelle J Koopman.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

RJK conceived of the study, participated in its design and coordination and drafted the manuscript. GFP participated in the design and performed the statistical analysis. SMC participated in the design of the study, and the analysis and interpretation of data. JAS participated in the analysis and interpretation of qualitative data. DRM participated in the design of the study, and the analysis and interpretation of the data. All authors provided critical review of manuscript drafts, including the final manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Koopman, R.J., Petroski, G.F., Canfield, S.M. et al. Development of the PRE-HIT instrument: patient readiness to engage in health information technology. BMC Fam Pract 15, 18 (2014). https://doi.org/10.1186/1471-2296-15-18