
From theory to 'measurement' in complex interventions: Methodological lessons from the development of an e-health normalisation instrument

Abstract

Background

Although empirical and theoretical understanding of processes of implementation in health care is advancing, the translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field.

Methods

A 30-item instrument, the Technology Adoption Readiness Scale (TARS), for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals.

Results

The developed instrument was pre-tested in two professional samples (N = 46; N = 231). Ratings of items representing normalisation ‘processes’ were significantly related to staff members’ perceptions of whether or not e-health had become ‘routine’. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts.

Conclusions

To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes measurement; (3) representation of multiple perspectives and collaborative nature of work; and (4) emphasis on generic measurement approaches that can be flexibly tailored to particular contexts of study.


Background

Advancements in new technologies of health and medical care – and in their social organisation – promise to benefit the health and well-being of patients and society. However, getting new technologies into practice beyond the context of research projects that demonstrate the (clinical) efficacy or effectiveness of new practices and procedures remains a problem. Researchers are now investing much effort in understanding and resolving issues of ‘implementation’ in relation to health care interventions and practices, and this is reflected in a fast-growing field of ‘implementation science’. Understanding the science behind implementation processes has also become an important concern for healthcare policy and practice. Following Linton [1]:

‘Implementation involves all activities that occur between making an adoption commitment and the time that an innovation either becomes part of the organizational routine, ceases to be new, or is abandoned (…) [and the] behavior of organizational members over time evolves from avoidance or non-use, through unenthusiastic or compliant use, to skilled or consistent use. (p 65)’

There is a vast literature on implementation in service organisations [2]; however, efforts at implementing new technologies and practices remain problematic. The gap between research evidence and practice remains wide [3], and concerns about the large numbers of ‘pilot’ studies of new interventions that never lead to sustainable services are repeatedly expressed [4]. This is particularly the case for ‘e-health’ technologies – defined as practicing and delivering health care using information and communication technology [5] – despite significant promise for improving health care quality and efficiency [6].

In attempting to address such problems of implementation, the application of theory to designing health care interventions [7], planning and evaluating them [8–10], and developing effective strategies for their implementation [11] offers much potential.

However, obstacles to the use of theory for such purposes are numerous, and include the identification of relevant and useful theoretical perspectives from the huge body of literature on implementation that spans diverse academic disciplines (for example, psychology, sociology, business, healthcare management). Such theoretical diversity includes approaches that emphasise attitudes and behaviours [8, 12, 13]; diffusion and adoption of innovations through social networks [14]; and Science and Technology Studies (STS) approaches [15, 16] that emphasise technology design and its relations with human actors. Reviews such as those of Greenhalgh and colleagues [2] (of literature relevant to the diffusion of innovations in service organisations) and Grol and colleagues [8] (of theories useful for planning and studying initiatives for improving patient care) begin to address this difficulty by mapping the terrain of implementation theories that may be useful for guiding both intervention development and approaches to implementation, and summarising their key processes and emphases.

Advances in theory-based intervention development and implementation have been made particularly with regard to changing healthcare professionals’ behaviour and practice to facilitate the uptake of evidence-based-practice strategies [7, 17]. Drawing on psychological theories of behaviour, Michie and colleagues [17] explicitly set out to develop theory-based explanations of factors that affect professional practice in a format that would be accessible to non-academic users, and associated work has included guidance for designing questionnaires based on the Theory of Planned Behavior [18]. Models focused on psychological theory, however, tend to over-emphasise the personal agency of individuals and underplay the importance of context. For example, implementation failures are often attributed to slow behaviour change by professionals, when there are likely to be other good and predictable socio-organisational reasons for such failure [19]. Nonetheless, such approaches show promise in facilitating the uptake of new interventions and/or ways of working, particularly where the roles and actions of individuals in making an implementation ‘effective’ are an appropriate focus for implementation efforts.

We would argue, however, that in practice, many interventions being implemented in healthcare settings are subject to more complex influences than those known to directly affect the behaviour of individuals. New practices get taken up and become ‘workable’ due to a complex interplay between features of the intervention/practice itself, the actions of individuals involved in the process, and aspects of the physical and social environment in which implementation activities are undertaken. Normalization Process Theory (NPT) [20, 21] approaches the problem of implementation with a view to understanding such dynamics. It emphasises the processes by which new technologies and practices become normalised, focusing on the work that this requires of people working both individually and collaboratively. What really matters here is the extent to which new technologies and practices can – and do – become embedded in both the contexts in which they are to be used, and in the everyday practices of the individuals whose work is affected by these innovations. NPT is concerned with the generative processes [22] that underpin three core problems: implementation (bringing a practice or practices into action); embedding, (when a practice or practices may be routinely incorporated into the everyday work of individuals and groups); and integration (when a practice or practices are reproduced and sustained in the social matrices of an organization or institution). In NPT it is postulated that practices become routinely embedded in social contexts as the result of people working, individually and collectively, to enact them, and that the production and reproduction of a practice requires continuous investment by individuals to carry action forward in time and space. There are four sets of processes that characterise different kinds of ‘normalisation work’, and which require particular kinds of investments from individuals and organisations [20, 21]:

Coherence: the process of sense-making and understanding that individuals and organisations have to go through in order to promote or inhibit the routine embedding of a practice among its users. These processes are energized by investments of meaning made by participants.

Cognitive participation: the process that individuals and organisations have to go through in order to enrol individuals to engage with the new practice. These processes are energized by investments of commitment made by participants.

Collective action: the work that individuals and organisations have to do to enact the new practice. These processes are energized by investments of effort made by participants.

Reflexive monitoring: the informal and formal appraisal of a new practice once it is in use, in order to assess its advantages and disadvantages and which develops users’ comprehension of the effects of a practice. These processes are energized by investments in appraisal made by participants.

A considerable body of research now supports NPT as an adequate and useful theory for explaining processes of the normalization of practices associated with complex interventions. This evidence spans diverse settings in which new technologies and practices have been the focus of its application, such as telecare [23], e-health [24, 25], clinical decision support systems [26], teledermatology [27], infertility management [28], maternity services [10] and the management and treatment of depression [29, 30].

The development of structured tools for assessing implementation processes, which take account of this complex interplay between interventions, individual actions, and context, would represent an advance in applying theory to understand and address implementation problems in practice. Existing assessment tools that focus on organisational factors relevant to ‘readiness’ for interventions in healthcare [31–33] do not adequately reflect the complexity of normalisation processes as characterised by the NPT – for example, the dynamic and iterative relationships between the types of work involved in making sense of a new practice, enacting it (collectively) and appraising its outcome and value. They are therefore limited in the extent to which they offer practical ways of facilitating implementation processes in ways that lead to the embedding of new practices within contexts of use.

A further challenge for the development of theory-based measures that capture the complexity of implementation activities concerns the various ways in which outcomes of such activities may be defined. In contrast to psychological theories of implementation behaviour, which focus on explaining and/or quantifying individuals’ uptake of a new practice, NPT focuses on more subtle – and gradual – processes, such as ‘embedding’, ‘integrating’ and ‘normalisation’. NPT does not offer a ‘definition’ of the term ‘normalisation’, for it can appropriately refer to a process or a ‘state’, depending on the context and the frame of reference – that is, for the most part ‘normalisation’ is considered to be an ongoing cycle of activity aimed at making a new practice ‘fit in’ with the work of individuals and their context of practice, but when a practice ceases to be ‘new’ or no longer requires additional effort, it may be framed as having become ‘normalised’. Further work needs to be done to develop ways of defining and measuring outcomes of efforts to implement new practices that reflect the complexity and context-dependent nature of what it means to have ‘successfully’ or ‘effectively’ implemented a new practice.

Thus the development of structured assessment tools for understanding the complex processes involved in integrating complex interventions, including e-health [34], into practice remains a priority. Recently, theory-based tools for assisting implementers in planning and ‘thinking through’ particular interventions with reference to the social and organisational contexts in which they are to be implemented have been offered [35, 36]. Although promising, however, such tools do not provide measurements to be used during implementations to assess progress towards successful implementation (however defined by stakeholders). Such measurement tools would offer the potential to identify (and quantify) problems with an implementation during the process, but so far work in this area remains limited.

The objective of this study, then, was to advance work on translating theory into structured assessment instruments for research and practical purposes in these contexts, by drawing on the findings of a study [24] that undertook the development and preliminary testing of a Technology Adoption Readiness Scale (TARS) for measuring normalisation processes in the context of e-health service interventions. This paper therefore aims to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring processes involved in the implementation of e-health interventions; and (2) identify key issues and methodological challenges for further advancing work in this field. First, however, a fuller explanation of the theoretical development of NPT is required.

Normalization process theory: Theoretical development

NPT was initially developed as an applied theoretical model to assist clinicians and researchers to understand and evaluate the factors that inhibit and promote the routine incorporation of complex healthcare interventions in practice. Since then, it has been developed as a middle-range theory of socio-technical change [20], which characterizes the mechanisms involved in the embedding of practices within their immediate and broader social contexts.

The development of NPT [37] focused on addressing two key criteria for theory to be ‘useful’: that it must be both adequately described and fit for purpose. Thus, the theory has been developed to offer transparent and transferable explanations for the phenomena of interest (processes of embedding new practice and ways of working) revealed by empirical investigation [38, 39]. In doing so, we have followed sociological approaches to theory building [22, 40, 41] to undertake four kinds of conceptual work required to make a theory ‘fit for purpose’: describing, explaining, making knowledge claims, and investigating observed phenomena (see Table 1: Requirements of Theory).

Table 1 Requirements of a Theory (from May et al. 2007 [42])

Considerable work has been undertaken to critique NPT in terms of its potential for describing key processes that underpin the success or otherwise of implementation, and to ensure that NPT’s core constructs can be operationalised in a stable and consistent way by multiple user constituencies, including testing out NPT in qualitative studies of a variety of practices and in a diverse range of contexts [10, 23–30]. Recent work has also extended the practical utility of NPT for a wide range of academic and non-academic users. An online ‘users’ manual’ for NPT (http://www.normalizationprocess.org) provides descriptions, guidance on use of the theory, and applied examples; this sits alongside work to frame NPT as a tool for designing, developing and implementing complex interventions [9] and to make NPT accessible to diverse user groups who are interested in understanding and solving practical problems of implementation.

The development of good practice for designing and administering structured instruments to assess the processes of normalization described and explicated in the formal specification of the theory is the next step for further extending the utility of NPT. In terms of enhancing NPT’s ‘fitness for purpose’, this is important for facilitating investigation as a key component of theory (Table 1). The development of NPT-derived ‘assessment’ measures would represent a step beyond current work undertaken with NPT to operationalise it as a tool for planning interventions [9, 35], towards exploring investigative questions about the theory’s scope for use in predicting – or, more appropriately, providing assessment of ‘potential for achieving’ [21] – the normalization of complex interventions in practice.

Development of technology adoption readiness scale (TARS)

An instrument development study was undertaken as part of a larger study that used a multi-method approach to understanding barriers to the uptake and integration of e-health into healthcare professionals’ practice [43]. The TARS study aimed to develop a structured instrument to measure processes of normalisation in relation to the routine use of a specific e-health system. As NPT is the basis for the instrument, these normalisation processes are seen to reflect staff perceptions of factors related to the collaborative work required for the normalisation of particular e-Health systems in a given context. The primary purpose of this instrument then was to enable users to quantify a range of processes proposed by the NPT to contribute to the successful normalisation of a new intervention – in this case, e-health. As such, the instrument could be used both by practitioners charged with implementing an e-health intervention (and thus used in a ‘diagnostic’ capacity for identifying and resolving problems early on in an implementation), and by research teams or practitioners undertaking service evaluations (thus as an evaluative tool). Although the ultimate aim of a programme of work we are undertaking on measure development based on NPT is to develop ‘predictive’ tools based on the theory, development of an instrument for this purpose was beyond the scope of this study.

This project was undertaken in two stages, each of which is described here in turn. The first stage was the development of the instrument and the second stage was a preliminary test of the utility of the instrument in two different NHS settings in which staff were using particular e-health systems. The focus of this project was on development rather than the empirical determination of psychometric properties, thus the final discussion in this paper will focus primarily on the processes and experiences of translating empirically derived theoretical constructs into structured tools and the implications of this for undertaking applied assessments in health care settings.

Phase 1: Item development and conceptual validation

In this phase, we aimed to draw on the NPT to develop a comprehensive set of general items – TARS items – reflecting factors affecting the routine use of e-health, ready for application in specific settings.

Methods (phase 1)

The first step was understanding the key ‘assumptions’ of NPT and identifying implications and challenges for developing measures based on the theory. Table 2 outlines the key considerations regarding this process, which will be returned to in the discussion. Rather than prescribing specific methodological processes, this preliminary analysis served as a general frame of reference to guide the development of TARS.

Table 2 Key challenges for developing NPT based measures

Item generation

The TARS items were developed using three sources of knowledge about factors that affect the use of e-health: theoretical knowledge as represented by the NPT; empirical knowledge, in the form of findings of a meta-review of e-health being conducted as a related project [24]; and expert knowledge obtained using an expert survey (described below).

At the time the study commenced, we were working with the Normalisation Process Model (NPM) [44]; therefore the bulk of the questions developed for inclusion stemmed from the NPT’s ‘Collective Action’ construct (see below for brief descriptions, and elsewhere [20, 42] for accounts of the theory development process). In NPT, the key constructs of the NPM remain of central importance, but as processes underlying a more general construct of Collective Action that relates firmly to the ‘enactment’ stage of an intervention.

Contextual Integration (CI): the degree to which the proposed e-health system fits (or integrates) with the overall goals and structure of the organisation (context), as well as the capacity of the organisation to undertake the implementation.

Relational integration (RI): the way in which different professional groups relate to each other, and how well the proposed e-health initiative fits (or integrates) with existing relationships, as well as the degree to which it promotes trust, accountability and responsibility in inter-group relationships.

Interactional workability (IW): the degree to which the e-health system enables (or impedes) the work of interactions between health professionals and patients – e.g. a consultation.

Skill set workability (SSW): the degree to which the e-health initiative fits with existing working practices, skill sets, and perceived job role.

Item construction began by translating the theoretical constructs into plain language statements, each having a single and comprehensible meaning. For example, the construct of ‘contextual integration’ included the statement that a factor affecting the normalisation of a new technology is ‘… the extent to which organizational effort is allocated to an e-Health system in proportion to the work that the system is intended to do.’ Such statements were simplified, for example, to ‘sufficient organisational effort has gone into supporting the system’ and ‘the rewards of using the system outweigh the effort’. This process resulted in 23 items for rating which, after critical peer review, were increased to a final set of 27 rating items to be included in the expert survey.

Expert survey

An online survey of experts was conducted to (a) test the face validity of items intended for inclusion in the final item set and (b) collect data about the perceived relative importance of individual items. The 27-item set was pilot-tested as a live link by members of the project advisory group (n = 5), resulting in minor refinements (shown in Table 3). In the survey, participants were asked to rate the importance of each item to the routine use of e-Health, using a scale in which 0 = not at all important; 1 = some importance; 2 = moderate importance; 3 = very important; 4 = extremely important; with the option of choosing 'don't know'.

Table 3 Descriptive analysis of results of Expert survey

The sample was defined as authors of published reviews of e-health, drawing on papers included in the scoping review, and supplemented with additional searching of relevant fields (e.g. telecare, telemedicine) to develop a sufficient sampling frame. A database of 308 potential respondents with (unverified) email addresses was produced. Authors were invited via email to take part in the survey, and were sent personalised links for response tracking. Non-responders were sent up to two reminders, approximately 10 days apart.

Results (phase 1)

A total of 63 participants completed the expert survey out of 252 invitations (25% response) that were presumed to be received (subtracting invitations returned as ‘undeliverable’). Sample characteristics are presented in Table 4. Details of ratings for the item set are reported elsewhere [24] (and available as Additional File 1), but in general, items were highly endorsed by the survey participants as important factors affecting the routinisation of e-health systems.

Table 4 Sample characteristics of expert survey participants

Preliminary descriptive analysis was undertaken to make decisions about excluding or combining existing items, analysing each item in terms of (i) the mean rating of importance for that item, and (ii) any correlations between the item and other items in the set (correlations of r > 0.5). The results of this decision analysis are presented in Table 3. Items that were highly correlated with other items were either discarded or re-written into a single item, particularly where importance ratings were relatively low. This process reduced the 27 items to 21.
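The paper does not name the software used for this decision analysis, but the logic described above – per-item mean importance ratings plus flagging of inter-item correlation pairs above r = 0.5 – can be sketched in Python with pandas. The item names and ratings below are invented purely for illustration.

```python
import pandas as pd

# Invented example data: expert importance ratings (0-4) for three items,
# one column per item, one row per respondent.
ratings = pd.DataFrame({
    "item_01": [4, 3, 4, 2, 4],
    "item_02": [4, 3, 4, 2, 3],   # tracks item_01 closely
    "item_03": [1, 4, 0, 3, 2],
})

# (i) mean importance rating per item
means = ratings.mean()

# (ii) inter-item correlations; flag pairs with r > 0.5 as candidates
# for discarding or merging, especially where mean importance is low
corr = ratings.corr()
flagged = [
    (a, b, corr.loc[a, b])
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if corr.loc[a, b] > 0.5
]
print(means.round(2).to_dict())
print(flagged)
```

In the study itself, flagged pairs fed a qualitative judgement (discard, or rewrite as a single item) rather than being dropped automatically.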

Participants in the Expert survey were invited to suggest (using free text) any factors they felt to be particularly important and which they believed had not been covered in the item set. Analysis of these free-text comments made by survey participants (n = 31) resulted in the eventual inclusion of five new items about contextual integration issues (Q.5–9 in Table 5). Peer review (amongst the project team) resulted in further revisions, notably the addition of three items to reflect the NPT’s constructs of coherence and cognitive participation, resulting in a final set of 30 generic TARS items ready for adapting for use in specific contexts.

Table 5 Final set of TARS items

Phase 2: Testing TARS items in specific health contexts

Methods (phase 2)

This phase tested the utility of TARS for assessing normalisation processes in relation to specific e-health systems, using convenience samples in two NHS contexts. These sites were chosen because (i) specific e-Health systems were in use by health professionals, and (ii) the two sites reflected different levels of ‘normalisation’ of e-health. At Site 1, use of the e-health system (community nurses using Personal Digital Assistant technology) was relatively new, and provided an opportunity to use the TARS items in a context where e-health was still in the experimental stages for some users. At Site 2, the entire organisation was based on e-health systems – so staff could be expected to have greater experience of e-Health, and over a longer time.

The factor statements developed in Phase 1 were translated into directional statements and given a 7-point response scale eliciting level of agreement in relation to the e-health technology being assessed in that context. The scale of responses was anchored at either end with ‘strongly disagree’ and ‘strongly agree’, with unlabelled interim points. Explanatory text and demographic questions varied slightly between sites. Following the set of TARS rating items, two additional questions were included to assess: (i) participants’ perceptions about whether the system was not at all, partly, or completely in routine use; and (ii) their perceptions about the likelihood of it becoming routine (on a 5-point scale: definitely not; probably not; possibly; probably will; definitely will). Although the complexity of developing outcome ‘measures’ to represent the concept of normalisation has already been noted (and was not the focus of this study), these questions were included to represent perceptions of the current state of normalisation of the e-health technology, for the purpose of exploring the utility of the TARS items that were developed to represent processes contributing to normalisation.

In both sites, the survey was conducted electronically using a commercial provider (http://www.surveymonkey.com). Site contacts facilitated participant recruitment and management of response rates via reminders. At both sites, two reminders were issued following the original invitation (at intervals of 10 – 14 days), which increased response rates. The research team did not have direct access to staff details and email addresses (as our ethical approval for the project did not extend to accessing staff personal details).

Data were analysed descriptively, using frequency tables to visually assess the distributions of ratings elicited using the scales. As responses on individual items were in many cases skewed and non-normally distributed, non-parametric cross-tabulation analysis with Pearson’s chi-square statistic was used to explore differences in perceptions relating to TARS items according to perceived level of routinisation of the e-health system. For these analyses, new categorical variables were created by combining rating points. For Site 1, responses to the TARS items were dichotomised into a group indicating non-agreement (ratings from 0, ‘strongly disagree’, to 3, the neutral midpoint) and a group responding with various levels of agreement (ratings 4–6). At Site 2 (with a larger sample size and a different spread of responses), TARS item responses were trichotomised as follows: disagreement (0–2); neutral or some agreement (3 or 4); and moderate to strong agreement (5 or 6).
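As a minimal sketch of this recoding-plus-testing step (assuming scipy, which the paper does not name, and using invented counts for a single hypothetical TARS item), the Site 1-style analysis might look like:

```python
from scipy.stats import chi2_contingency

def dichotomise(rating):
    """Site 1-style recoding of a 0-6 agreement rating:
    0-3 (including the neutral midpoint) = non-agreement; 4-6 = agreement."""
    return "agree" if rating >= 4 else "non-agree"

# Invented contingency table of counts for one item:
# rows = non-agreement / agreement; columns = 'partly' / 'completely' routine.
table = [
    [14, 3],
    [10, 19],
]

# chi2_contingency applies Yates' continuity correction for 2x2 tables
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A significant result here would mirror the pattern reported below: respondents who perceive the system as completely routine endorsing the item more strongly.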

Results (phase 2)

At Site 1, 46/243 participants completed the survey (19% response rate). At Site 2, 231/1351 (17% response rate) completed the survey sufficiently for inclusion in the analysis. It should be noted that response rates are approximate and conservative, as their calculation is based on the total number of staff emailed an invitation to participate. These rates do not reflect adjustment for reasons for non-participation such as absence from work, or failed delivery of emails, as such information was not available to the researchers. Sample characteristics for both sites are presented in Table 6, and Table 7 presents frequencies for the combined categorical variables, to indicate item responses. Tables 8 and 9 present the significant results of the chi-square analyses for each site respectively, and ‘n’ denotes sample sizes for the different cells within each analysis (which differ from the frequencies presented in Table 7 because ‘don’t know’ responses were excluded from these analyses on a per-item basis). These analyses indicated that, for a number of items, stronger positive endorsement was indicated by participants who perceived e-health to be routine, thus supporting the NPT. For Site 1, significant differences between groups perceiving e-health as ‘partly routine’ compared with ‘completely routine’ were evident for 12 out of the 30 items.
For these items, the pattern of relationship is such that those who perceived the e-health system to be completely a routine part of their work were more likely to agree than not agree with the statements about the system, or to show a higher proportion within the group responding with agreement (i.e. overall, they indicated more positive responses). Here, the strongest significant differences occurred on two of the Contextual Integration items – ‘this organization has a culture that is supportive of change’ and ‘this e-Health system fits in with the priorities and challenges of our organization’ – along with the Coherence item ‘the staff who work here have a shared understanding of what the system is for and how it is to be used’.

Table 6 Sample characteristics for Phase 2 participants (Site 1 and Site 2)
Table 7 TARS items: Frequencies for combined categorical variables
Table 8 Site 1 Chi Square analysis of agreement with TARS items by perception of level of routinisation
Table 9 Site 2 Chi Square analysis of agreement with TARS items by perception of level of routinisation

At Site 2, nine TARS items indicated significant differences in responses between participants perceiving different levels of routinisation (Table 9). These results suggested that, compared with those who felt that e-health had already become ‘completely routine’, those for whom it had not become routine were less likely to agree that sufficient organisational effort had gone into supporting the system; and less likely to show strong agreement (rather than neutrality or some agreement) that: e-health is a different way of working; that the organisational culture is supportive of change; that they understand their own accountability and liability; and that there are ongoing mechanisms for monitoring and appraising how e-health is used. The group for whom e-health was not yet a completely routine part of their practice were also more likely to disagree that there is good evidence of clinical effectiveness of the e-health system, and that there is a shared understanding of what the system is for and how it is to be used. Here, the strongest differences between groups were evident on items relating to liability, accountability and appropriateness of skills.

Together, the results from both sites suggest that the ratings made on the instrument items are related to participants’ perceptions of how routinely the e-health systems are being used in their practice contexts.
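The group comparisons summarised above were tested with chi-square analyses of agreement by perceived level of routinisation (Tables 8 and 9). As a minimal sketch of the kind of test involved — using invented counts for illustration, not the study's data — a Pearson chi-square test on a 2×2 table of agreement by routinisation group can be computed with only the standard library:

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test for a 2x2 contingency table.

    table: [[a, b], [c, d]] with rows = groups (e.g. 'completely routine'
    vs 'not yet routine') and columns = responses (agree vs not agree).
    Returns (statistic, p_value) with one degree of freedom.
    """
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n  # expected count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    # For df = 1, the chi-square survival function reduces to erfc.
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: agreement with one TARS-style item, split by
# whether respondents saw the e-health system as completely routine.
stat, p = chi_square_2x2([[30, 10], [12, 18]])
```

For a 2×2 table (one degree of freedom) the chi-square tail probability equals the complementary error function of the square root of half the statistic, which avoids any dependency on a statistics package; for larger tables one would normally reach for a library routine such as `scipy.stats.chi2_contingency`.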

Discussion

This paper has set out to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring processes involved in the implementation of e-health interventions based on Normalization Process Theory; and (2) identify key issues and methodological challenges for further advancing work in this field.

The practical output of this study was the development of the TARS instrument, which was intended to enable researchers and practitioners to quantify a range of processes proposed by NPT to contribute to the successful normalisation of e-health, either as a ‘diagnostic’ tool or for evaluation purposes. Developing TARS required considerable ‘translation work’, both in terms of the methodological implications of the theory’s underlying assumptions (Table 2) and in moving from theoretical constructs to specific questions. To develop a set of assessment items with good face validity, multiple sources of information were collected and used, including the theoretical specifications of NPT (and its underlying empirical basis), the perspectives of academic experts in e-health implementation, and primary qualitative data concerning professionals’ views of the implementation and integration of e-health in the NHS [24]. Whilst the expert survey (Phase 1) endorsed the proposed items as reflecting important factors affecting the potential for e-health to become a routine part of working practices (and suggested further items about contextual integration), health professionals themselves placed greater emphasis on practice-based issues concerning benefits, particularly to patients, and workload management. Representing different kinds of ‘expertise’ thus helps to ensure that research instruments developed for use in practice contexts are ‘fit for purpose’. In this study, we focused primarily on health professionals using e-health in their day-to-day work, but even within this focus there were important differences between the roles and experiences of staff in relation to the e-health systems we studied, which affected their capacity to answer all of the questions.
Although the questions included in the instrument were developed drawing on multiple sources of stakeholder input in general, this finding does raise concerns about the level of face validity achieved for the specific groups within our samples. We suggest that in using an instrument such as TARS, continued work is required to ensure the face validity of questions at the level of the participants within the local setting of use. We must also acknowledge that the results presented in this study are limited by focusing primarily on nurses as a professional group. In other studies, for example, it will also be important to consider assessments from the perspectives of a more diverse range of medical and healthcare professionals, managers and/or implementers [25], or indeed patients [45]. This study thus highlights the collaborative nature of health care work, and the importance of ensuring that multiple stakeholders’ perspectives [46] are incorporated into the development of tools to assess implementation processes in these contexts.

As one of the first studies to use NPT in quantitative research, this study aimed to progress work on NPT towards statistical investigation of relationships between implementation processes and outcomes in terms of ‘normalisation’. Although only tests of association (rather than causality) between normalisation processes and outcomes were possible in this study, ratings of normalisation processes differed between groups holding different perceptions of whether or not the e-health systems in the respective study sites had become part of routine practice. The two study sites themselves differed, both in the technology being implemented (mobile electronic devices to facilitate community nursing versus computerised decision support services) and in the level of progress towards the technology being considered ‘normal’, so differences between them in which items related to perceptions of normalisation would be expected. Although preliminary, these findings lend support to assessing the potential predictive value of the TARS in prospective, longitudinal studies. Furthering work on the predictive utility of TARS – and NPT more generally – will however require flexible approaches to identifying and specifying ‘outcome’ measures. The process undertaken in this study demonstrates that ‘normalisation’ is highly context-dependent, relating to the practice itself, the environment in which it operates, and the different groups of individuals that relate to it. As such, NPT does not provide any particular definition of ‘normalisation’ for use as an outcome variable in quantitative studies; designers of NPT-based studies assessing outcomes will need to develop study-specific measures based on which outcomes are relevant, and these are likely to be multiple and to include both subjective (self-report) and objective (e.g. usage data) measures.
For example, normalisation ‘outcomes’ that might be considered include: level of use; increasing use over time; the extent of shift from one practice to another; the disappearance of a previous practice; reported acceptability of a practice; or measures of the quality of work stemming from use of the practice. The development of approaches to measuring such outcomes will require not only developing and testing quantitative measures, but also further qualitative investigation of how people judge whether or not a new practice can be considered ‘normalised’, and how that has or has not happened.
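To make this concrete, the sketch below operationalises three such candidate ‘outcomes’ (level of use, increasing use over time, and shift away from a previous practice) from hypothetical usage logs. All figures are invented for illustration; they are not drawn from the study data:

```python
from statistics import mean

# Hypothetical weekly usage logs for an e-health system: counts of
# records completed electronically versus on paper.
weeks = [
    {"electronic": 12, "paper": 48},
    {"electronic": 25, "paper": 35},
    {"electronic": 41, "paper": 19},
    {"electronic": 55, "paper": 5},
]

# Outcome 1: level of use (mean electronic records per week).
level_of_use = mean(w["electronic"] for w in weeks)

# Outcome 2: increasing use over time (simple first-to-last change).
trend = weeks[-1]["electronic"] - weeks[0]["electronic"]

# Outcome 3: shift from the previous practice (share of work now done
# electronically in the most recent week).
share = weeks[-1]["electronic"] / (weeks[-1]["electronic"] + weeks[-1]["paper"])
```

Even this toy example shows why outcome measurement is context-dependent: each metric answers a different question about ‘normalisation’, and a real study would need to decide which (if any) of them matters in its setting, alongside subjective self-report measures.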

This project aimed to develop a simple structured research instrument that could be used in other contexts. However, the process of considering the many possible ways to frame questions about processes involving change demonstrated that use of tools such as TARS in other research contexts will require highly flexible and adaptive approaches, to ensure that questions are framed appropriately for the stage of implementation/use of the new technology or practice being studied. Here, we chose to frame questions as Likert-type statements about the e-health technology of interest and to elicit respondents’ agreement with those statements, but in other situations it might be preferable to frame questions in other ways, such as: eliciting expectations of a technology planned but not yet used; inviting direct comparisons between key aspects of one type of technology/practice and another (e.g. ‘X is a better way of working than Y’); or assessing the perceived impact of the technology/practice over time (e.g. ‘The impact of X on [practice] has been...’). Although not intended at the outset of this study, the set of TARS items framed as ‘factors’ in the format in which they were presented for eliciting ratings of perceived relative importance (i.e. without reference to any direction of effect, as presented in Table 3) could be used for the development of research instruments that include questions framed according to the specific objective of the study. This consideration may prove challenging for further validation of the TARS items as ‘an instrument’, but it also offers a range of opportunities for practical use of the tool in assessing staff perceptions of issues that this study has shown to be important for the normalisation of e-health.
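The analyses in this paper worked with agreement categories derived from such Likert-type statements (the ‘combined categorical variables’ of Table 7). A minimal sketch of that kind of recoding is shown below; the exact collapsing scheme here is an assumption for illustration, not the paper's specification:

```python
# Collapse 5-point Likert responses into three combined categories
# before tabulation, as is common when cell counts are small.
def collapse(response):
    """Map a 5-point Likert response onto agree / neutral / disagree."""
    if response in ("strongly disagree", "disagree"):
        return "disagree"
    if response == "neutral":
        return "neutral"
    return "agree"  # covers "agree" and "strongly agree"

# Hypothetical responses to one item from five staff members.
responses = ["agree", "neutral", "strongly agree", "disagree", "agree"]
counts = {}
for r in responses:
    counts[collapse(r)] = counts.get(collapse(r), 0) + 1
# counts -> {"agree": 3, "neutral": 1, "disagree": 1}
```

The collapsed counts are what would feed the contingency tables for the chi-square comparisons reported in the Results.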

In relation to NPT, the study described in this paper has also contributed to theory development. It has produced a set of quantitative questions that can be used to assess staff perceptions of processes relevant to the normalisation of e-health with reference to underlying aspects of the constructs within the Collective Action component of NPT, along with single items for assessing perceptions relating to the NPT constructs of coherence, cognitive participation, and reflexive monitoring. This development process, which included gathering and incorporating views from diverse sets of academic and professional stakeholders, challenged our thinking about the constructs and the multiple interpretations that could be made of their meaning. In part, the processes described here contributed directly to the expansion of the theory from the NPM to the NPT as currently presented (see elsewhere [20, 42] for detailed description). This process has continued beyond this study [36], and is likely to continue as the theory is used, tested and challenged for a variety of purposes.

Notwithstanding its limitations (discussed below), the study offers preliminary support for the conceptual distinctions between and within the constructs of NPT (particularly with respect to the Collective Action construct), and for the predictive potential of items in the instrument with respect to normalisation outcomes (as demonstrated by associations between NPT processes, as represented by the TARS items, and perceived normalisation of e-health in the contexts of study). Although the TARS instrument does not represent balanced coverage of NPT in its entirety, the key underlying assumptions of the theory as a whole – such as the focus on the collaborative nature of work required of a practice-based intervention – remain constant across the developmental shift from NPM to NPT, and thus the methodological challenges and issues described in this paper are of enduring relevance. In relation to the TARS study, the emphasis on the ‘collective action’ component for framing data collection was appropriate, as we were undertaking assessments focused on the ‘enactment’ stage of e-health implementations. However, to further develop the TARS instrument – and to develop measures of NPT that more comprehensively cover the wider frame of implementation activity spanning the stages of conceptualising (coherence), engagement of individuals (cognitive participation), and reflection/evaluation (reflexive monitoring) – more longitudinal research will be needed.

This study focused primarily on instrument development rather than formal validation; however, key limitations are worth noting. Despite considerable effort by the research team to maximise response rates, achieved rates were lower than expected. The implications of these response rates are difficult to assess, as the rates themselves are ‘worst-case’ estimates: true response rates (i.e. percentages participating out of those who received and read the invitation) could not be calculated due to limited access to information. Reliance on key contacts at survey sites (who were helpful but already working under pressure) also limited the timing and frequency of reminders that could be achieved, and thus the need for greater researcher control over access to research participants must be emphasised. Selection of sites for data collection in this study was also necessarily pragmatic, and access was negotiated well in advance of the instrument being developed and ready for data collection (as is often the case with applied research). Given that the study sites already had at least some level of adoption of e-health technology, prospective assessment of the predictive value of the instrument items in terms of normalisation outcomes was not possible in this study, but should be the objective of further studies in which perceptions can be assessed before, during and after the implementation of a new practice-based initiative. In relation to health technology in general, the challenges of assessing new technologies in practice contexts are recognised [47] but worth emphasising here.

Implications

In highlighting valuable lessons for theory-based instrument development, the study advances knowledge within the field of implementation science. The processes involved in implementing complex interventions are exactly that – complex. NPT has been built from over a decade of observation and analysis of the complex interplay of the structural, organisational, social, and individual factors that affect the ways in which new practices become (or do not become) embedded in routine practices and the contexts in which they are enacted. Such theoretical complexity presents challenges for the development and validation of ‘simple’ measures that can be used generically across contexts that differ qualitatively in ways that reflect the reality of health care service settings. However, the research described in this paper supports the observation of others [7, 8] that this is a challenge that must be embraced as a means of facilitating the effectiveness and uptake of health care interventions in practice.

The findings of this study suggest four key recommendations for developing and assessing theory-derived measures of implementation processes for use with complex healthcare interventions in practice. Firstly, careful consideration must be given to the underlying assumptions of the chosen theory itself, and to the considerable translation and validation work likely to be required (drawing on multiple sources of evidence) to identify key concepts and express them appropriately as simple questionnaire-style items. Secondly, identification – or rather development – of appropriate measures of implementation (or normalisation) ‘outcomes’ is key to the practical utility of theory-derived measures, but this is highly context-dependent and thus requires tailored development within specific study (or practice) contexts, for example through preparatory (and qualitative) assessment of what it would mean for a particular intervention to be considered ‘normalised’ within that context. Thirdly, a comprehensive understanding of implementation and normalisation processes in any given context requires multiple-perspective assessments that are sensitive to the varied contributions of different professional (or other) groups working individually and collaboratively, and that reflect a good understanding of the roles of such individuals and the contexts in which they conduct their work. Finally, we suggest that in undertaking theory-based assessments of this kind, it must be recognised from the outset that approaches to measurement must themselves be ‘fit for purpose’ and as such are unlikely to be achieved entirely by standardised measures developed for use across diverse settings. Consideration should therefore be given to developing research instruments that come with guidance on how they can be applied flexibly according to the objectives of the research study and the specific contexts of use [18].

Conclusion

Understanding the processes by which new technologies and practices become normalised in health care settings – so that we can improve approaches to implementation – remains an important challenge for academics, policy makers, health care managers and practitioners. This study extended work on Normalization Process Theory (NPT) towards tests of the predictive utility of the theory by developing an instrument to assess normalisation potential in relation to e-health. We suggest that pursuing the development of generic tools and measures for these purposes – such as the TARS instrument described here – is a useful starting point. However, the practical utility of theory-derived research instruments for measuring implementation and normalisation processes can only be fully realised through research and development activity focused on providing guidance for the operationalisation and adaptation of such measures for use in the contextually diverse environments in which health care work is conducted. We suggest that this study represents the beginning of a very complex journey.

Endnote

a It should be noted that response rates are approximate and conservative, as calculation was based on the total number of staff emailed an invitation to participate. These rates do not reflect adjustment for reasons of non-participation, such as absence from work or failed delivery of emails, as such information was not available to the researchers.
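The distinction the endnote draws can be expressed as a simple calculation. The sketch below contrasts the conservative rate (every emailed invitation in the denominator) with an adjusted rate that excludes known non-receipt; all figures are invented for illustration:

```python
def response_rates(completed, invited, undelivered=0, absent=0):
    """Conservative response rate (all invitations in the denominator)
    versus an adjusted rate that excludes known non-receipt, such as
    undelivered emails or staff absent from work."""
    conservative = completed / invited
    adjusted = completed / (invited - undelivered - absent)
    return conservative, adjusted

# Invented figures: 46 completed surveys out of 200 invitations sent,
# of which 15 bounced and 25 recipients were on leave.
conservative, adjusted = response_rates(46, 200, undelivered=15, absent=25)
```

With these figures the conservative rate is 0.23 while the adjusted rate rises to 0.2875, illustrating why the reported rates are ‘worst-case’ estimates: the adjusted rate can only be equal to or higher than the conservative one.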

References

  1. Linton JD: Implementation research: state of the art and future directions. Technovation. 2002, 22 (2): 65-79. 10.1016/S0166-4972(01)00075-X.

  2. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Quarterly. 2004, 82 (4): 581-629. 10.1111/j.0887-378X.2004.00325.x.

  3. Grol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003, 362 (9391): 1225-1230. 10.1016/S0140-6736(03)14546-1.

  4. Wilson P, Petticrew M, Calnan M, Nazareth I: Disseminating research findings: what should researchers do? A systematic scoping review of conceptual frameworks. Implement Sci. 2010, 5 (1): 91. 10.1186/1748-5908-5-91.

  5. Pagliari C, Sloan D, Gregor P, Sullivan F, Detmer D, Kahan JP, Oortwijn W, MacGillivray S: What is eHealth (4): a scoping exercise to map the field. Journal of Medical Internet Research. 2005, 7 (1).

  6. Eysenbach G, Diepgen TL: The role of e-health and consumer health informatics for evidence-based patient choice in the 21st century. Clin Dermatol. 2001, 19 (1): 11-17. 10.1016/S0738-081X(00)00202-9.

  7. Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N: Changing the behavior of healthcare professionals: the use of theory in promoting the uptake of research findings. Journal of Clinical Epidemiology. 2005, 58 (2): 107-112. 10.1016/j.jclinepi.2004.09.002.

  8. Grol RPTM, Bosch MC, Hulscher MEJL, Eccles MP, Wensing M: Planning and studying improvement in patient care: the use of theoretical perspectives. Milbank Quarterly. 2007, 85 (1): 93-138. 10.1111/j.1468-0009.2007.00478.x.

  9. Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C, Finch T, Kennedy A, Mair F, O'Donnell C, et al: Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Medicine. 2010, 8 (1): 63. 10.1186/1741-7015-8-63.

  10. Forster D, Newton M, McLachlan H, Willis K: Exploring implementation and sustainability of models of care: can theory help?. BMC Public Health. 2011, 11 (Suppl 5): S8. 10.1186/1471-2458-11-S5-S8.

  11. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004, 8 (6).

  12. Gagnon MP, Godin G, Gagne C, Fortin JP, Lamothe L, Reinharz D, Cloutier A: An adaptation of the theory of interpersonal behaviour to the study of telemedicine adoption by physicians. International Journal of Medical Informatics. 2003, 71 (2-3): 103-115.

  13. Legris P, Ingham J, Collerette P: Why do people use information technology? A critical review of the technology acceptance model. Inf Manag. 2002, 40: 191-204.

  14. Rogers EM: Diffusion of Innovations. 1995, New York: Free Press, 4th edition.

  15. Webster A: Health, Technology and Society: A Sociological Critique. 2007, Basingstoke: Palgrave Macmillan.

  16. Jensen C: Power, technology and social studies of health care: an infrastructural inversion. Health Care Analysis. 2008, 16 (4): 355-374. 10.1007/s10728-007-0076-2.

  17. Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A: Making psychological theory useful for implementing evidence based practice: a consensus approach. Quality and Safety in Health Care. 2005, 14 (1): 26-33. 10.1136/qshc.2004.011155.

  18. Francis JJ, Eccles MP, Johnston M, Walker A, Grimshaw J, Foy R, et al: Constructing Questionnaires Based on the Theory of Planned Behaviour: A Manual for Health Services Researchers. 2004, Newcastle upon Tyne, England: Centre for Health Services Research, Newcastle University.

  19. Presseau J, Sniehotta FF, Francis JJ, Campbell NC: Multiple goals and time constraints: perceived impact on physicians' performance of evidence-based behaviours. Implement Sci. 2009, 4: 77. 10.1186/1748-5908-4-77.

  20. May C, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, Rapley T, Ballini L, Ong BN, Rogers A, Murray E, Elwyn G, Légaré F, Gunn J, Montori V: Development of a theory of implementation and integration: Normalization Process Theory. Implement Sci. 2009, 4: 29.

  21. May C, Finch T: Implementing, embedding, and integrating practices: an outline of normalization process theory. Sociology. 2009, 43 (3): 535-554. 10.1177/0038038509103208.

  22. Lieberson S, Lynn FB: Barking up the wrong branch: scientific alternatives to the current model of sociological science. Annu Rev Sociol. 2002, 28: 1-19. 10.1146/annurev.soc.28.110601.141122.

  23. May C, Finch T, Cornford J, Exley C, Gately C, Kirk S, Jenkings KN, Mair FS, Osbourne J, Robinson AL, Rogers A, Wilson R: Integrating Telecare for Chronic Disease Management in the Community: What Needs to be Done?. 2009, London: Report for the Department of Health Policy Research Programme (PRP).

  24. Mair F, May C, Murray E, Finch T, Anderson G, O'Donnell C, Wallace P, Sullivan F: Understanding the Implementation and Integration of E-Health Services. 2009, London: Report for the NHS Service Delivery and Organisation R&D Programme (NCCSDO).

  25. Murray E, Burns J, May C, Finch T, O'Donnell C, Wallace P, Mair F: Why is it difficult to implement e-health initiatives? A qualitative study. Implement Sci. 2011, 6 (1): 6. 10.1186/1748-5908-6-6.

  26. Elwyn G, Légaré F, van der Weijden T, Edwards A, May C: Arduous implementation: does the Normalisation Process Model explain why it's so difficult to embed decision support technologies for patients in routine clinical practice?. Implement Sci. 2008, 3: 57. 10.1186/1748-5908-3-57.

  27. Finch T, Mair FS, May CR: Teledermatology in the United Kingdom: lessons in service innovation. Br J Dermatol. 2007, 156: 521-527. 10.1111/j.1365-2133.2006.07608.x.

  28. Wilkes S, Rubin G: Process evaluation of infertility management in primary care: has open access HSG been normalized?. Primary Health Care Research & Development. 2009, 10: 290-298. 10.1017/S1463423609990168.

  29. Gunn J, Kokanovic R, Palmer V, Potiriadis M, Johnson C, Johnston A-AK, Dowrick C, Griffiths F, Hegarty K, Herrman H, et al: Re-organising the Care of Depression and Other Related Disorders in the Australian Primary Health Care Setting. 2009, Canberra: Australian Primary Health Care Research Institute.

  30. Gask L, Bower P, Lovell K, Escott D, Archer J, Gilbody S, Lankshear A, Simpson A, Richards D: What work has to be done to implement collaborative care for depression? Process evaluation of a trial utilizing the Normalization Process Model. Implement Sci. 2010, 5: 15.

  31. Oliver DP, Demiris G: An assessment of the readiness of hospice organizations to accept technological innovation. Journal of Telemedicine and Telecare. 2004, 10 (3): 170-174. 10.1258/135763304323070832.

  32. Lehman WEK, Greener JM, Simpson DD: Assessing organizational readiness for change. J Subst Abus Treat. 2002, 22 (4): 197-209. 10.1016/S0740-5472(02)00233-7.

  33. Snyder-Halpern R: Development and pilot testing of an Organizational Information Technology/Systems Innovation Readiness Scale (OITIRS). Proceedings of the AMIA 2002 Annual Symposium. 2002, 702-706.

  34. Jennett P, et al: Preparing for success: readiness models for rural telehealth. J Postgrad Med. 2005, 51 (4): 279-285. http://www.jpgmonline.com/article.asp?issn=0022-3859;year=2005;volume=51;issue=4;spage=279;epage=285;aulast=Jennett;type=0

  35. Murray E, May C, Mair F: Development and formative evaluation of the e-Health Implementation Toolkit (e-HIT). BMC Medical Informatics and Decision Making. 2010, 10 (1): 61. 10.1186/1472-6947-10-61.

  36. May C, Finch T, Ballini L, MacFarlane A, Mair F, Murray E, Treweek S, Rapley T: Evaluating complex interventions and health technologies using normalization process theory: development of a simplified approach and web-enabled toolkit. BMC Health Serv Res. 2011, 11 (1): 245. 10.1186/1472-6963-11-245.

  37. May C, Mair FS, Finch T, MacFarlane A, Dowrick C, Treweek S, Rapley T, Ballini L, Ong BN, Rogers A, et al: Development of a theory of implementation and integration: Normalization Process Theory. Implement Sci. 2009, 4 (29).

  38. Hechter M, Horne C: Theory is explanation. Theories of Social Order. Edited by: Hechter M, Horne C. 2003, Stanford, CA: Stanford University Press, 3-8.

  39. Treweek S: Complex interventions and the chamber of secrets: understanding why they work and why they do not. Journal of the Royal Society of Medicine. 2005, 98 (12): 553. 10.1258/jrsm.98.12.553.

  40. Zetterberg H: On Theory and Verification in Sociology. 1962, New York: Bedminster Press, 3rd edition.

  41. Turner J: Analytical theorizing. Social Theory Today. Edited by: Giddens A, Turner J. 1987, Cambridge: Polity Press, 156-194.

  42. May C, Finch T, Mair F, Ballini L, Dowrick C, Eccles M, Gask L, MacFarlane A, Murray E, Rapley T, et al: Understanding the implementation of complex interventions in healthcare: the Normalization Process Model. BMC Health Serv Res. 2007, 7: 148. 10.1186/1472-6963-7-148.

  43. Mair F, May C, Murray E, Finch T, Anderson G, O'Donnell C, Wallace P, Sullivan F: Understanding the Implementation and Integration of E-Health Services. 2009, London: National Co-ordinating Centre for the National Institute for Health Research Service Delivery and Organisation Programme (NCCSDO).

  44. May C: A rational model for assessing and evaluating complex interventions in health care. BMC Health Serv Res. 2006, 6: 86. 10.1186/1472-6963-6-86.

  45. Rogers A, Kirk S, Gately C, May CR, Finch T: Established users and the making of telecare work in long term condition management: implications for health policy. Social Science & Medicine. 2011, 72 (7): 1077-1084. 10.1016/j.socscimed.2011.01.031.

  46. Wagner SM, Rau C, Lindemann E: Multiple informant methodology: a critical review and recommendations. Sociological Methods & Research. 2010, 38 (4): 582-618.

  47. Mowatt G, Cairns JA, et al: When and how to assess fast-changing technologies: a comparative study of medical applications of four generic technologies. Health Technology Assessment. 1997, 1 (14): 1-149.


Acknowledgements

We wish to acknowledge the support of key contacts in our two study sites, who facilitated the conduct of the surveys within their organisations, and Dr Tom Chadwick who advised on the statistical analysis. We also wish to thank the reviewers of this manuscript, whose suggestions led to substantial improvements to the paper. We would like to acknowledge the NIHR Service and Delivery Organisation (SDO) for funding the study via project grant 08/1602/135. This article presents independent research commissioned by the National Institute for Health Research (NIHR) SDO programme. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR, or the Department of Health. The NIHR SDO programme is funded by the Department of Health, UK.

Author information

Corresponding author

Correspondence to Tracy L Finch.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

TLF conducted data collection and statistical analysis, and drafted the manuscript. TLF, CRM & FSM conceived of the study, TLF coordinated the study, and all other authors participated in its design and helped to draft the manuscript. All authors read and approved the final manuscript.

Electronic supplementary material

Additional file 1: Table S1.Means, Standard Deviations and Frequencies of importance ratings (Expert survey) (DOCX 20 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Finch, T.L., Mair, F.S., O’Donnell, C. et al. From theory to 'measurement' in complex interventions: Methodological lessons from the development of an e-health normalisation instrument. BMC Med Res Methodol 12, 69 (2012). https://doi.org/10.1186/1471-2288-12-69
