Assessment of variation in the Alberta Context Tool: the contribution of unit-level contextual factors and specialty in Canadian pediatric acute care settings

Abstract

Background

There are few validated measures of organizational context and none that we located are parsimonious and address modifiable characteristics of context. The Alberta Context Tool (ACT) was developed to meet this need. The instrument assesses 8 dimensions of context, which comprise 10 concepts. The purpose of this paper is to report evidence to further the validity argument for ACT. The specific objectives of this paper are to: (1) examine the extent to which the 10 ACT concepts discriminate between patient care units and (2) identify variables that significantly contribute to between-unit variation for each of the 10 concepts.

Methods

859 professional nurses (844 valid responses) working in medical, surgical and critical care units of 8 Canadian pediatric hospitals completed the ACT. A random intercept, fixed effects hierarchical linear modeling (HLM) strategy was used to quantify and explain variance in the 10 ACT concepts to establish the ACT's ability to discriminate between units. We ran 40 models (a series of 4 models for each of the 10 concepts) in which we systematically assessed the unique contribution (i.e., error variance reduction) of different variables to between-unit variation. First, we constructed a null model in which we quantified the overall variance in each of the concepts. Then we controlled for the contribution of individual-level variables (Model 1). In Model 2, we assessed the contribution of practice specialty (medical, surgical, critical care) to variation since it was central to construction of the sampling frame for the study. Finally, we assessed the contribution of additional unit-level variables (Model 3).

Results

The null model (unadjusted baseline HLM model) established that there was significant variation between units in each of the 10 ACT concepts (i.e., discrimination between units). When we controlled for individual characteristics, significant variation in the 10 concepts remained. Assessment of the contribution of specialty to between-unit variation enabled us to explain more variance (1.19% to 16.73%) in 6 of the 10 ACT concepts. Finally, when we assessed the unique contribution of the unit level variables available to us, we were able to explain additional variance (15.91% to 73.25%) in 7 of the 10 ACT concepts.

Conclusion

The findings reported here represent the third published argument for validity of the ACT and add to the evidence supporting its use to discriminate patient care units by all 10 contextual factors. We found evidence of relationships between a variety of individual- and unit-level variables that explained much of this between-unit variation for each of the 10 ACT concepts. Future research will include examination of the relationships between the ACT's contextual factors and research utilization by nurses and ultimately the relationships between context, research utilization, and outcomes for patients.

Background

Implementation science is the investigation of methods, interventions, and variables that shape the use of research findings in practice, i.e., research utilization. Research demonstrates that contextual factors, i.e., the work setting, consistently moderate strategies to move research into clinical practice [1–3]. Therefore, understanding contextual factors is important to advancing the science of research utilization [4–7]. However, investigation is needed to understand what factors influence context and how context in turn shapes the use of research findings in practice. A better understanding of both of these processes will in turn inform the development and evaluation of interventions to increase research use by healthcare providers, the goal of which is improved patient and organizational (system) outcomes [8, 9]. Integral to this goal is the ability to assess and quantify context [10, 11]. The Alberta Context Tool (ACT) was developed to meet this goal.

The Alberta Context Tool (ACT)

The ACT is a parsimonious survey designed to measure organizational context in complex healthcare settings. It is administered at the level of the individual healthcare provider to elicit their perception of context at the patient care unit and/or organizational (hospital) level, depending on the context of care delivery. For nurses, this level is predominantly at the patient care unit.

Three principles guided the development of the ACT: (1) substantive theory, (2) brevity (ability to complete the instrument in 10 minutes or less), and (3) modifiability (focus on researchable elements of context which are amenable to change). We used the Promoting Action on Research Implementation in Health Services (PARiHS) framework [12] to conceptualize organizational context. Where the framework did not provide direction, we operationalized concepts from related literature (e.g., [13–16]). The PARiHS framework has three core elements - evidence, facilitation and context - which are considered essential to the successful implementation of research into practice [10, 12, 17]. In this framework, context is understood to be the environment or setting where research is to be implemented, and is proposed to have three discrete dimensions: culture, leadership and evaluation [12]. Culture is defined as "the forces at work, which give the physical environment a character and feel" [17] (p.97). Leadership is defined as the "nature of human relationships" [17] (p.98). Effective leadership, in this framework, is conceptualized to give rise to clear roles, effective teamwork and organizational structures, and the involvement of organizational members in decision making and learning. Evaluation, in the PARiHS framework, refers to feedback mechanisms (individual and system level), sources, and/or methods for evaluation [12].

The ACT survey consists of a series of items representing 8 dimensions that are comprised of 10 contextual concepts: (1) leadership, (2) culture, (3) evaluation, (4) social capital, (5) structural and electronic resources, (6) formal interactions, (7) informal interactions, (8) organizational slack - staffing, (9) organizational slack - space, and (10) organizational slack - time. Definitions and sample items of the eight context dimensions are listed in Table 1. The survey exists in three versions (adult care, pediatric care, and long-term care), each with multiple forms (nurses, allied healthcare providers, practice specialists, physicians, and managers). The pediatric nurse version, reported in this paper, consists of 56 items and underwent initial assessment for reliability and validity using data from a national, multi-site study with pediatric nurse professionals [18]. In that report, a principal components analysis (PCA) indicating a 13-factor solution (accounting for 59.26% of the variance in 'organizational context') was reported. Bivariate associations between research utilization levels and the majority of ACT factors as defined by the PCA were statistically significant at the 5% level. Each ACT factor also showed a trend of increasing mean scores ranging from the lowest level to the highest level of research use, further supporting construct validity. The instrument also demonstrated adequate internal consistency reliability with Cronbach's alpha coefficients ranging from a low of 0.54 to a high of 0.91 for the 13 factors [18].

Table 1 Dimensions of the ACT

In a subsequent validity assessment of the ACT [19], completed on responses obtained from healthcare aides (i.e., unregulated nursing care providers) in residential long-term care settings (i.e., nursing homes), we assessed advanced aspects of validity using the Standards for Educational and Psychological Testing (the Standards) validation framework, considered best practice in psychometrics [20]. The Standards identifies four sources of validity evidence, all of which contribute to construct validity. The four sources are: content evidence (the extent to which the items represent the content domain of the concept), response processes evidence (how respondents interpret, process, and elaborate upon item content and whether this is in accordance with the concept being measured), internal structure evidence (relationships between the items within a concept), and relations to other variables evidence (relationships between the concept of interest and external variables with which it is expected and not expected to be related) [20]. In the latter validation paper conducted with healthcare aides in nursing homes, we extended our initial validity assessment and examined advanced aspects of internal structure validity evidence (e.g., confirmatory factor analyses) as well as additional relations with other variables validity testing. The overall pattern of the data (assessed in the confirmatory factor analyses) was consistent with the hypothesized structure of the ACT. Additionally, eight of the ten ACT concepts were related, at statistically significant levels, to instrumental research utilization, supporting the construct validity of the ACT [19].

Patient Care Units as Microsystems

The microsystems literature emphasizes the importance of directing system improvement strategies at the level of clinical (patient care) units. Its proponents argue that these units where care delivery occurs are the essential building blocks or functional units of the organization [21–25]. The clinical unit represents a complex and dynamic system, characterized by interaction between various elements or features (such as leadership, culture, personnel and information) in the process of care delivery [21]. The term 'unit' implies a discrete entity, the margins of which, typically, are defined by geographic limits and the practice specialty [26]. According to the microsystems literature however, "the clinical unit has a semipermeable boundary that mediates relationships with patients and with many support services and external microsystems" [21] (p. 476).

Organizations or macrosystems are comprised of mesosystems such as programs and centers, which, in turn, consist of these connected and interrelated microsystems or units. Nursing care tends to be organized at the level of the clinical unit [4]. Thus, individual patients receive care in clinical units (microsystems) that are embedded within departments, services or programs, which are integrated to form healthcare organizations [27]. Targeting improvement strategies at the level of the functional unit, therefore, has the potential to transform healthcare systems and the patient care experience [21]. Research examining clinical microsystems indicates that high performing units are associated with better patient outcomes [21].

The microsystems literature acknowledges that the effectiveness of healthcare providers is, in part, mediated by the context or environment in which they work [22]. Thus, knowledge of unit context is essential to the development of interventions to optimize care. The microsystems approach aims to understand the context of care delivery, to design systems that enable and support healthcare providers to deliver care consistent with best practice (research) and, ultimately to ensure that patients receive safe, high quality care [22]. The work of Sales et al. [26] reinforces the importance of studying units as individual entities. They found that because of heterogeneity between units, aggregation of nurse data above the level of unit produced biased results and poor estimates of associations with quality measures. This highlights the importance of determining unit-level estimates and identifying variation between microsystems.

The purpose of this paper is to report evidence to further a validity argument for the ACT (which measures context) when used in pediatric settings with professional nurses to capture unit-level context scores. Specifically, we (1) examined the extent to which the 10 ACT concepts discriminate between patient care units, and (2) identified variables that contribute to explaining the between-unit level variation in each of the 10 concepts. While assessment of between-unit discrimination and variance is not a traditional form of validity testing, it is essential to understanding the construct validity of instruments like the ACT that collect data at the individual (respondent) level with the purpose of aggregating those responses to obtain higher (e.g., unit) level estimates.

Methods

Design, Sample, and Data Collection

We used a cross-sectional survey design. Thirty-two patient care units in eight pediatric hospitals across Canada provided the sampling pool for the ACT's administration. The 32 units were distributed between medical units, surgical units and critical care units (neonatal and pediatric intensive care). Five healthcare professional groups were eligible to participate: (1) nurses, (2) physicians, (3) allied healthcare professionals, (4) clinical specialists (e.g., educators), and (5) managers. Inclusion and exclusion criteria for the professional subgroups are presented in Additional File 1. For psychometric testing reasons, we wanted a homogeneous sample and therefore conducted the analysis reported here on the largest group of respondents - nurses (which accounted for 67% of the total sample). Data were collected using an online survey and compiled in a centralized database at the core site for the study. Eligible participants were provided with a survey package containing a letter introducing the study, and a business card providing a Uniform Resource Locator (URL) and unique password to access the survey online.

Ethical approvals for this study were obtained from the Health Research Ethics Boards of the appropriate universities, as well as the hospital ethics review boards (where applicable) for all hospitals participating in the study.

Measures

The analyses reported here use data from two data collection instruments: (1) the Translating Research on Pain in Children (TROPIC) Unit Profile Form, and (2) the TROPIC Survey (in which the ACT was embedded), both developed specifically by the research team for this study. The TROPIC Unit Profile Form consists of a series of questions about the structural and human resources available on each unit. Examples of items include: average length of patient stay and the number of nurses working on the unit. A research nurse at each site completed the form electronically; a training session preceded data collection. All data were then compiled together at a centralized data collection centre, at the core site for the study. The TROPIC survey was used to collect provider (staff)-level data. The survey is composed of a suite of survey instruments designed to measure: (1) organizational context, (2) research utilization, (3) staff outcomes (e.g., health status, job satisfaction), and (4) select other individual and organizational factors believed to influence research utilization and staff outcomes. The core of the TROPIC Survey is the ACT. Development of the ACT and the results of its initial psychometric assessment are summarized in the background section of this paper, with further details published in an earlier issue of this journal [18].

Study Variables

Dependent variables

The dependent variables examined in this study were the 10 contextual concepts of the ACT (See Table 1). To obtain one score for all items within a concept, the individual items within each concept were averaged (culture, leadership, evaluation, social capital, organizational slack-staffing, organizational slack-time, organizational slack-space) or recoded as existing or not existing and then counted or summed (informal interactions, formal interactions, structural and electronic resources).
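
To make this scoring rule concrete, the sketch below shows how item responses might be rolled up into one score per concept. It is a minimal illustration in Python (pandas); the column names, item counts, and the present/absent recoding threshold are hypothetical placeholders rather than the actual ACT item labels.

```python
import pandas as pd

def score_act_concepts(df: pd.DataFrame) -> pd.DataFrame:
    """Derive one score per ACT concept from item-level responses.

    Column names (e.g., 'leadership_1' ... 'leadership_6'), item counts, and the
    present/absent recoding threshold are hypothetical placeholders, not the
    actual ACT item labels.
    """
    scores = pd.DataFrame(index=df.index)

    # Likert-based concepts: average the items within each concept.
    averaged = {
        "leadership": [f"leadership_{i}" for i in range(1, 7)],
        "culture": [f"culture_{i}" for i in range(1, 7)],
        "evaluation": [f"evaluation_{i}" for i in range(1, 7)],
    }
    for concept, items in averaged.items():
        scores[concept] = df[items].mean(axis=1)

    # Count-based concepts: recode each item as existing (1) / not existing (0), then sum.
    counted = {
        "formal_interactions": [f"formal_int_{i}" for i in range(1, 5)],
        "structural_electronic_resources": [f"resource_{i}" for i in range(1, 12)],
    }
    for concept, items in counted.items():
        scores[concept] = (df[items] > 0).astype(int).sum(axis=1)

    return scores
```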

Independent variables

The independent variables included in our analyses are listed in Table 2. The research team selected these variables from those available on the TROPIC Unit Profile Form and the TROPIC survey based on current knowledge represented in the literature on organizational context in healthcare. The independent variables were verified in a series of team meetings as being either at the individual level (Level 1) or at the unit level (Level 2).

Table 2 Descriptive Statistics for Individual and Unit-Level Variables by Specialty

Analytic Approach

Reliability and validity of aggregated data at the unit level

Aggregation of individual-level data to a higher (e.g., unit) level is an important methodological issue that has received minimal attention in health services research. While direct measurement of unit-level concepts (e.g., culture) is preferable, it is most often not possible. Therefore, in order to include unit-level estimates of these concepts in our statistical models, we need to obtain data on them from individuals and then aggregate these data to the higher (unit) level. One concern with aggregation is that as data are aggregated, less information will be carried-up to the higher level than is optimal. Therefore, the first step in our analysis was to examine the reliability and validity of all independent variables aggregated to the unit-level. We calculated four standard empirical aggregation indices for this assessment: (1) intraclass correlation 1, ICC(1); (2) intraclass correlation 2, ICC(2); (3) eta-squared, η2; and (4) omega-squared, ω2. One-way analysis of variance (ANOVA) was performed on each variable using the unit as the group variable. The source table from the one-way ANOVA was used to calculate the four standard aggregation indices.

ICC(1) is a measure of individual score variability about the subgroup mean. ICC(1) values theoretically can range from 0 to 1, with values of 0 indicating no perceptual agreement and values of 1 indicating perfect perceptual agreement among members within the same group. Therefore, values greater than 0 (0.10 is the accepted standard) indicate a degree of coherence among individuals about the mean values within each group (i.e., unit) [28]. James [29] examined ICC(1) values reported in applied psychological research studies to justify some degree of perceptual agreement among group members; values ranged from 0 to 0.5, with a median of 0.12. Others have reported similar values. For example, Bliese [30] and Vogus and Sutcliffe [31] reported that ICC(1) values in applied research typically fall between 0.05 and 0.20, and between 0.05 and 0.30, respectively. ICC(2) is a measure of stability of aggregated data at the group level; values exceeding 0.60 justify aggregation [28]. η2 and ω2 are measures of validity, also known as measures of 'effect size' in ANOVA. An effect size is a measure of the strength of the relationship between two variables and thus illustrates the magnitude of the relationship. η2 denotes the proportion of variance in the individual variable (in each derived ACT concept) accounted for by group membership (e.g., by belonging to a specific nursing unit) [32]. This value is equivalent to the R-squared value obtained from a regression model, and where group sizes are large, to ICC(1) [30]. ω2 measures the relative strength of aggregated data as an independent variable. It is also an estimate of the amount of variance in the dependent variable (e.g., in each derived ACT concept) accounted for by the independent variable (i.e., by group membership - belonging to a specific nursing unit) [33]. Larger values of η2 and ω2 indicate stronger effect sizes and relationships between variables. As a result, larger values of η2 and ω2 also indicate stronger 'relations to other variables' validity evidence (as described in the Standards validation framework) and thus contribute to overall construct validity. Details on the methods for calculating each of these standard aggregation indices are located in our previous work [4, 18, 34, 35].

There are multiple methods for calculating intraclass correlations (ICC). The two most widespread methods are from: (1) random coefficient (multi-level) models, calculated as ICC = unit-level variance/(unit-level variance + individual level variance), and (2) the one-way random-effects ANOVA model, calculated as ICC(1) = (BMS - WMS)/(BMS + [K-1] WMS), where BMS = between mean square, WMS = within mean square, and K = the number of participants per group. At this stage of our analyses (which is preliminary to conducting the multi-level modeling) we were seeking statistical support for aggregating some individual variables to the unit level before entering them into the models. Therefore, we chose to calculate ICC using the latter formula (from the one-way random-effects ANOVA model). ICC using this model is referred to as ICC(1) [29, 36, 37], or ICC(1,1) [38]. The two methods of calculating ICC will produce similar, but not identical, estimates (See Additional File 2). However, by running a one-way random-effects ANOVA model at this stage of our analysis, we were also able to calculate the remaining standard aggregation statistics (ICC(2), η2, and ω2, described previously) in addition to the ICC(1). This allowed us to obtain a more thorough picture of the reliability and validity of our variables when aggregated to the unit level.
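
As an illustration of how these indices follow from the one-way ANOVA source table, the sketch below computes ICC(1), ICC(2), η2, and ω2 for a single variable grouped by unit. It uses the standard formulas cited above and assumes roughly equal unit sizes (k is taken as the mean unit size), which is a simplification of the exact calculation.

```python
import pandas as pd

def aggregation_indices(values: pd.Series, units: pd.Series) -> dict:
    """ICC(1), ICC(2), eta-squared, and omega-squared from a one-way ANOVA
    source table, with the patient care unit as the grouping factor.

    Assumes roughly equal unit sizes: k is taken as the mean unit size.
    """
    df = pd.DataFrame({"y": values, "unit": units}).dropna()
    grand_mean = df["y"].mean()
    by_unit = df.groupby("unit")["y"].agg(["mean", "count"])

    n_units = len(by_unit)
    n_total = len(df)
    k = by_unit["count"].mean()  # average number of respondents per unit

    ss_between = (by_unit["count"] * (by_unit["mean"] - grand_mean) ** 2).sum()
    ss_total = ((df["y"] - grand_mean) ** 2).sum()
    ss_within = ss_total - ss_between

    bms = ss_between / (n_units - 1)       # between mean square (BMS)
    wms = ss_within / (n_total - n_units)  # within mean square (WMS)

    return {
        "ICC(1)": (bms - wms) / (bms + (k - 1) * wms),
        "ICC(2)": (bms - wms) / bms,
        "eta_sq": ss_between / ss_total,
        "omega_sq": (ss_between - (n_units - 1) * wms) / (ss_total + wms),
    }
```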

Multi-level analysis

The data collected for this study had a natural hierarchical (or clustered) structure, that is, nurse respondents were nested within patient care units, which were nested within pediatric hospitals. Therefore, our main analysis consisted of a series of multilevel models. The multilevel analyses were conducted using two levels. Level 1 had individual (nurse) variables and Level 2 had unit-level variables. We were limited to two levels by sample size (that is, we did not have sufficient hospitals at the third level, n = 8 hospitals). We used hierarchical linear modeling (HLM) [39] to fit a series of multilevel models capable of quantifying the within-unit (Level 1) and between-unit (Level 2) variation among the 10 contextual concepts in the ACT. A detailed description of the application of two-level multilevel models in nursing organizational research is described elsewhere [40]. The modeling was done using SAS 9.2, MLwiN 2.12, and HLM 6.06.

Individual-level variables

Six individual-level variables were examined and controlled for in the analysis. They were: (1) education, (2) employment status, (3) age, (4) adequate orientation, (5) job satisfaction, and (6) burnout-emotional exhaustion. These factors were conceptualized as individual variables and analysed at Level 1. Each variable (with the exception of burnout-emotional exhaustion) was collected using a single item on the TROPIC survey. Burnout-emotional exhaustion is one of three subscales on the Maslach Burnout Inventory [41], which was embedded in the TROPIC Survey. The emotional exhaustion subscale consists of three items scored on a 7-point Likert-type scale (0-6); an overall score is derived by taking the mean of the three items. Higher scores indicate higher levels of burnout.

Unit-level variables

Eight unit-level variables were examined and controlled for in the analysis. They were: (1) burnout-cynicism, (2) burnout-efficacy, (3) experience (length of time) on the unit, (4) support for innovative ideas, (5) the proportion of nurses possessing a baccalaureate degree or higher, (6) language of survey completion (English or French), (7) practice specialty (medicine, surgery, critical care), and (8) the number of beds in the unit.

Burnout-cynicism and burnout-efficacy are the remaining two subscales of the Maslach Burnout Inventory [41]. Like the emotional exhaustion subscale discussed above, the cynicism and efficacy subscales also consist of three items, each scored on a 7-point Likert-type scale. An overall score is derived for each subscale by taking a mean of the three items; higher scores on cynicism and lower scores on efficacy equate with higher burnout. These two burnout subscales were conceptualized as unit-level variables on the basis of a standard aggregation statistic, ICC(1). ICC(1) values for both subscales exceeded 0.1 (values were 0.201 and 0.297 for the cynicism and efficacy subscales respectively, see Table 3) indicating a degree of coherence among the nurses on these subscales within each unit. This same degree of coherence was not seen in the emotional exhaustion subscale (ICC(1) = 0.032), and it was therefore entered as an individual level variable.

Table 3 Reliability of Aggregated Unit-Level Variables

Experience on the unit, support for innovative ideas, and proportion of nurses possessing a baccalaureate degree or higher were collected using single items on the TROPIC survey. The remaining unit-level variables (specialty, language, and number of beds) were obtained as a result of the sampling strategy (in the case of specialty) or the TROPIC Unit Profile Form.

Modeling process

A series of models was constructed for each of the 10 ACT concepts, resulting in 40 models for our analysis. First, an unconditional (null) model was run for each ACT concept (n = 10 models). The null model fits an overall constant to the data. It is equivalent to performing a random-effect analysis of variance that allows us to calculate how much of the variation in the 10 ACT (contextual) concepts lies between individuals and between units. This was then followed by a series of three models for each ACT concept (n = 30 models) as follows:

  1. Model 1 - a two-level model that fits the constant plus the individual-level variables selected for inclusion. As a result, Model 1 explains the proportion of the variance in each of the 10 contextual variables that is between individuals;

  2. Model 2 - a two-level model using individual variables and practice specialty (medical, surgical, critical care);

  3. Model 3 - a two-level model using individual and unit-level variables (including practice specialty). While practice specialty is a unit-level variable, we were interested in examining its unique contribution to variation because it was central to construction of the sampling frame for the study. Therefore we constructed Model 2 in addition to Model 3 to disentangle this contribution.

We started the modeling process with the construction of an unconditional or null model without any predictors specified at the individual or unit levels for each ACT concept. This allowed us to apportion the variance at the two levels. The null model was defined as:

Level 1:

Yij = β0j + εij , εij ~ N(0, σ2) [Equation 1]

Level 2:

β0j = ψ00 + ϑ0j, ϑ0j ~ N(0, τ00) [Equation 2]

The combined null model is defined as:

Yij = ψ00 + ϑ0j + εij [Equation 3]

Where:

Yij = the value of the ACT (contextual) concept for the i th nurse in the j th unit

ψ00 = fixed term and represents the grand (or overall) mean score of the ACT (contextual) concept

ϑ0j = random term and represents unit offset effects from the grand mean or the discrepancy between overall mean and j th unit mean (unique contribution of each patient care unit)

εij = random term and represents individual offset effects from the unit mean or individual's group mean (unique contribution of each individual i in patient care unit j)
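
A minimal sketch of how the null model (Equation 3) might be fit and the variance partition inspected is shown below. It uses Python's statsmodels, which is a stand-in for the SAS, MLwiN, and HLM software actually used in the study, and the data frame and column names ('unit' and the concept score) are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_null_model(nurses: pd.DataFrame, concept: str = "leadership"):
    """Random-intercept null model (Equation 3): Y_ij = psi_00 + theta_0j + eps_ij.

    'nurses' is assumed to hold one row per respondent with the derived concept
    score and a 'unit' identifier; both column names are hypothetical.
    """
    model = smf.mixedlm(f"{concept} ~ 1", data=nurses, groups=nurses["unit"])
    fit = model.fit(reml=True)

    tau_00 = fit.cov_re.iloc[0, 0]       # between-unit variance (tau_00)
    sigma_sq = fit.scale                 # within-unit (residual) variance (sigma^2)
    icc = tau_00 / (tau_00 + sigma_sq)   # proportion of variance lying between units
    return fit, icc
```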

Following examination of the 10 null models, an individual-level analysis was performed on each ACT contextual concept (Model 1 run 10 times). This allowed us to examine the predictive relationships between the individual-level independent variables and each ACT concept. Model 1 was defined as follows.

Model 1 (Level 1 and Level 2 Combined):

Yij = ψ00 + ϑ0j + β1 (employment status)ij

+ β2 (education)ij + β3 (age)ij

+ β4 (burnout-emotional exhaustion)ij

+ β5 (adequate orientation)ij

+ β6 (job satisfaction)ij + εij [Equation 4]

Where:

Yij = the value of the ACT (contextual) concept for the i th nurse in the j th unit

ψ00 = the overall average for the ACT (contextual) concept

β1, β2, β3, β4, β5, β6 = coefficients of the individual variables at Level 1

εij = the unique contribution of each individual i in patient care unit j

The errors, εij, are assumed to be independently and normally distributed with constant variance σ2. Since the control variables are centered on the sample means, β0j is the mean score in a patient care unit after adjusting for the effects of employment status, education, age, burnout-emotional exhaustion, adequate orientation, and job satisfaction.
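
The grand-mean centering described above could be done as in the brief sketch below (Python/pandas); the covariate names are hypothetical stand-ins for the TROPIC variables.

```python
import pandas as pd

def center_on_sample_means(nurses: pd.DataFrame, covariates: list) -> pd.DataFrame:
    """Grand-mean center the Level 1 control variables so that beta_0j can be read
    as the adjusted unit mean. Covariate names are hypothetical TROPIC stand-ins."""
    centered = nurses.copy()
    for col in covariates:
        centered[col] = centered[col] - centered[col].mean()
    return centered

# e.g., centered = center_on_sample_means(nurses, ["age", "emotional_exhaustion",
#                                                  "job_satisfaction"])
```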

Model 1 was followed by the construction of Models 2 and 3, each of which were two-level models; Model 2 used individual variables and specialty as independent variables, while Model 3 used individual and unit-level variables (including specialty) as independent variables. Models 2 and 3 were defined as follows.

Model 2 (Level 1 and Level 2 Combined):

Yij = ψ00 + ψ1 (specialty )j + β1 (employment status)ij

+ β2 (education)ij + β3 (age)ij

+ β4 (burnout-emotional exhaustion)ij

+ β5 (adequate orientation)ij

+ β6 (job satisfaction)ij + εij + ϑj [Equation 5]

Model 3 (Level 1 and Level 2 Combined):

Yij = ψ00 + ψ1 (specialty)j

+ ψ2 (mean burnout-cynicism)j

+ψ3 (mean burnout-efficacy)j

+ ψ4 (mean years on unit)j

+ ψ5 (French-English status)j

+ ψ6 (mean number of unit beds)j

+ ψ7(mean support for innovative ideas)j

+ ψ8 (% baccalaureate or higher)j

+ β1 (employment status)ij + β2 (education)ij

+ β3 (age)ij + β4 (burnout-emotional exhaustion)ij

+ β5 (adequate orientation)ij

+ β6 (job satisfaction)ij + εij + ϑj [Equation 6]

Where:

Yij = the value of the ACT (contextual) concept for the i th nurse in the j th unit

ψ00 = the overall average for the ACT (contextual) concept

ψ1, ψ2, ψ3, ψ4, ψ5, ψ6 , ψ7, ψ8 = the regression coefficients for the effect of unit level factors on the adjusted ACT (contextual) concept

β1, β2, β3, β4, β5, β6 = coefficients of the individual variables at Level 1

εij = the unique contribution of each individual i in patient care unit j

ϑj = the unit level error term or the unique contribution of each unit to the unit level variation, τ. The ϑj's are assumed to be normally distributed with variance τ.

For all models, we assumed a random effect for the intercept and fixed effects for all of the Level 1 and Level 2 predictors. The variation between the 32 patient care units, or intraclass correlation (ICC), is the proportion of unconditional variance in each of the 10 dependent (contextual) variables attributable to the unit (i.e., before controlling for any individual background variables). ICC was calculated using the formula ICC = τ0/(τ0 + σ2), which is equivalent to the proportion of between-unit variance relative to the total variance in each of the 10 ACT concepts, where τ0 is the estimated unit-level error variance for the null model. The ICC was then assessed to determine whether unit-level variance was significantly different from 0. The relative reduction in unit-level error variance with respect to the null model (i.e., explained variance or R 2) was subsequently assessed. For two-level multilevel models, the amount of variance explained across the four models via the R 2 at Level 2 (the unit level) can be calculated as R 2 = 1 - (τp/τ0), where τp is the estimated unit-level error variance for the model after p additional variables were added to the null model.
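
To illustrate the variance-explained calculation, the sketch below fits a null model and one comparison model for a single concept and returns R 2 = 1 - (τp/τ0) at the unit level. It again uses statsmodels as a stand-in for the software named above, and all data frame and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def unit_level_r2(nurses: pd.DataFrame, concept: str, fixed_effects: list) -> float:
    """Proportional reduction in unit-level error variance, R2 = 1 - (tau_p / tau_0),
    for a comparison model relative to the null model for one ACT concept.

    'fixed_effects' holds the covariates of the comparison model (e.g., Model 1's
    individual-level variables); all data frame and column names are hypothetical.
    """
    null_fit = smf.mixedlm(f"{concept} ~ 1", data=nurses,
                           groups=nurses["unit"]).fit(reml=True)
    tau_0 = null_fit.cov_re.iloc[0, 0]   # unit-level error variance, null model

    rhs = " + ".join(fixed_effects)
    model_fit = smf.mixedlm(f"{concept} ~ {rhs}", data=nurses,
                            groups=nurses["unit"]).fit(reml=True)
    tau_p = model_fit.cov_re.iloc[0, 0]  # unit-level error variance after adding predictors

    return 1 - tau_p / tau_0

# Example (Model 1, individual-level covariates only):
# r2 = unit_level_r2(nurses, "evaluation",
#                    ["employment_status", "education", "age",
#                     "emotional_exhaustion", "adequate_orientation", "job_satisfaction"])
```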

Results

Sample Characteristics

We analysed data from 844 professional nurses in 32 patient care units across 8 Canadian pediatric hospitals. The percentage distribution by practice specialty in the sample was balanced across the 8 hospitals: medicine (n = 14, 43.8%), surgery (n = 8, 25%), and critical care (n = 10, 31.2%). The number of occupied beds ranged from 4 to 46 with a mean of 20.04 beds (SD = 10.07 beds). This number was consistent across practice specialties with a mean of 20.76 beds (SD = 10.04), 21.68 beds (SD = 4.86), and 17.74 beds (SD = 13.28), for medicine, surgery, and critical care units respectively. The average length of patient stay was similar in medicine (6.41 days, SD = 2.99) and surgery units (4.34 days, SD = 1.11) and slightly higher (9.47 days, SD = 8.23) in critical care units. Descriptive statistics for each of the independent variables entered into the multilevel analysis are presented in Table 2. The aggregation analyses for the independent unit-level variables are presented in Table 3. Both Tables 2 and 3 report findings using a random effects ANOVA model and are descriptive and preliminary in nature to our main analysis, in which we used a series of multi-level (HLM) models. Variability of each of the dependent (ACT) variables is presented in Table 4 and findings from the multilevel analysis are in Tables 5, 6 and 7.

Table 4 Mean and Standard Deviation Scores on ACT Concepts by Unit and Specialty
Table 5 Examination of Unit-Level Variation for the ACT Variables
Table 6 Contribution of Individual, Specialty, and Unit-Level Variables using R 2
Table 7 Significant Explanatory Variables on each ACT Concept

Reliability of Aggregated Unit-Level Variables

The statistics to assess the reliability of aggregated values supported aggregating the data on these variables to the level of the patient care unit (Table 3). Statistically significant (p < 0.05) F statistics and/or ICC(2) values greater than 0.60 indicate greater reliability and justification for aggregating the variables to the unit level. The ICC(1) values ranged from 0.0918 to 0.2968, indicating perceptual agreement among nurses about the mean values for the variables within each unit. That is, the nurses' perceptions about their own unit were similar. The relative effect sizes for both η2 and ω2 were moderate, suggesting that, as data were aggregated, some of the meaning a variable carried at the individual level was lost at the unit level.

Variability in the Dependent Variables

To assess variation in the dependent variables (the 10 ACT contextual concepts) examined in this study, we: (1) examined the mean scores for each concept by unit and by specialty (Table 4), and (2) constructed a series of caterpillar plots (Figure 1) examining the 10 dependent variables across the full sample of 32 patient care units. There were statistically significant differences between mean scores on all 10 dependent variables by unit (ANOVA, p < 0.001, Table 4) and for 7 of the 10 dependent variables (exceptions were informal interactions, social capital, and structural and electronic resources) by practice specialty (ANOVA, p < 0.05, Table 4). The caterpillar plots (Figure 1) were generated using the null hierarchical linear models and 95% confidence intervals; the MLwiN 2.12 program was used to generate these plots. The ascending order of mean scores seen in the caterpillar plots indicates that some units departed significantly from the overall level of each of the 10 ACT concepts across the full sample. These findings demonstrate adequate variability in the dependent variables.

Figure 1 Caterpillar Plot for each ACT Variable (Model 1, N = 32 Units).

Results of the Multi-Level (HLM) Analysis

Null model

The components of separate variances at the two levels (individual and unit) varied by the dependent ACT concept variable: Level 1 individual variance ranged from 0.2031 to 3.2173 (p < 0.001) and Level 2 unit variance ranged from 0.0171 to 0.3490 (p < 0.001). Each was statistically significant at the 0.01 level. These variance components were then used to estimate the ICC at the unit level. This proportion varied according to the dependent variable (ACT concept) as follows: leadership (0.2032), culture (0.0928), evaluation (0.1770), social capital (0.0777), organizational slack-staffing (0.2395), organizational slack-space (0.2634), organizational slack-time (0.1168), formal interactions (0.0161), informal interactions (0.1155), and structural and electronic resources (0.0979). Each was statistically significant at the 0.01 level (Table 5).

Analysis of individual predictors (Model 1)

Findings revealed that the contribution of individual-level variables in terms of relative error variance reduction when they were added into each null model (i.e., for each of the 10 contextual variables) varied significantly according to the ACT concept examined, ranging from a low of 0.0111 (evaluation) to a high of 0.9169 (structural and electronic resources) (Table 5 Column 4). The proportions of explained variance (R 2) for all 10 ACT concepts across the three models are presented in Table 5.

Analysis of individual and specialty predictors (Model 2)

We had hypothesized that part of the variance in the 10 ACT concepts should reflect practice specialty (medicine, surgery, and critical care). In Model 2, we assessed the effect of unit specialty on between-unit variation. Practice specialty accounted for between 0% (for four contextual variables: social capital, organizational slack-staffing, informal interactions, and structural and electronic resources) and almost 17% (for two contextual variables: evaluation [0.1662] and formal interactions [0.1673]) of the variance (Table 6 column 3: Model 1 vs. Model 2). This proportion of explained variance is after controlling for individual-level variables but prior to controlling for other unit-level variables.

Analysis of individual and specialty and other unit predictors (Model 3)

In Model 3, seven additional unit-level variables were added to the model (Table 2). The unique contribution of these unit-level variables to explaining variance in each of the 10 ACT concepts (i.e., after controlling for individual-level variables and practice specialty) is summarized in Table 6 (see column 5: Model 2 vs. Model 3).

The test for unit-level variance was significant for 7 of the 10 ACT concepts. The seven unit-level variables in Model 3 combined accounted for between 0.1527 and 0.7325 of the variations as follows: leadership (0.2311), evaluation (0.7325), organizational slack-staffing (0.1527), organizational slack-space (0.4755), organizational slack-time (0.4355), formal interactions (0.1591), and structural and electronic resources (0.4590). Model 3 results also indicate that significant residual (unexplained) variations remained after controlling for individual and unit-level variables entered into our models. For example, less than 60% of the variance was explained in the following five contextual variables: (1) leadership (0.3863 explained variance), (2) organizational slack-staffing (0.3354 explained variance), (3) organizational slack-space (0.4831 explained variance), (4) organizational slack-time (0.5430 explained variance), and (5) formal interactions (0.4435 explained variance) (Table 5 column 6: Model 3).

Finally, we assessed which unit-level variables were associated, at statistically significant levels, with each of the 10 ACT concepts in our multilevel analysis (Table 7). 'Support for innovative ideas' was the only unit-level variable that showed a consistent, statistically significant association across the majority (n = 8 of 10) of ACT concepts; the two exceptions were organizational slack-staffing and organizational slack-time. Specialty showed a statistically significant influence on two of the contextual variables: evaluation and formal interactions. When compared to critical care, both surgical (0.66, p < 0.001) and medical (-0.44, p < 0.001) units had statistically significantly lower scores on evaluation. Surgical units (-0.54, p = 0.011) had statistically significantly lower scores on formal interactions compared to both medicine and critical care units. Other unit-level variables associated, at statistically significant levels, with the contextual variables in our multilevel analysis included:

  • burnout-cynicism (with culture)

  • years on the unit (experience) (with culture and organizational slack-space)

  • unit size (with evaluation and social capital)

  • percentage of baccalaureate or higher prepared nurses (with evaluation)

Discussion

The findings reported here add to the validity evidence supporting the use of the ACT to discriminate patient care units by all 10 ACT contextual factors. In addition, we found evidence of relationships between a variety of individual- and unit-level variables that explained much of this between-unit variation for each of the 10 ACT concepts.

Aggregation of the ACT Concepts

The aggregation statistics performed in this study support the argument that ACT responses obtained from pediatric nurses (in our study sample) can be aggregated reliably and validly to obtain unit-level estimates of the dimensions of context represented in the ACT. This is consistent with our findings in the context of healthcare aides' scores in long-term care settings [19]. We ran the same aggregation statistics on allied healthcare professionals (e.g., rehabilitation therapists) (n = 209, mean = 7 responses/unit) who also completed the ACT survey in the study reported in this paper. These aggregation statistics did not support aggregation at the unit level. This is consistent with allied healthcare professionals' work practices being more aligned with programs (which consist of several units) rather than a single unit (where most nurses tend to work). The remaining respondent groups from the study were small in number (physicians n = 86, mean = 3 responses/unit; practice specialists n = 55, mean = 2 responses/unit; and managers n = 35, mean = 1 response/unit) and therefore we did not perform unit-level aggregation statistics on their responses. We suspect, however, that similar to allied healthcare professionals, their responses would align more with programs or possibly facilities (depending on their context of care delivery) rather than the unit.

Discrimination Between Patient Care Units

Our first objective was to examine the extent to which the 10 ACT concepts discriminate between patient care units. The majority of patient care is delivered within microsystems (i.e., within patient care units). The microsystems literature, according to Disch [22], highlights the importance of focusing on the unit, rather than the individual, as the unit of analysis. As such, work in this field has concentrated on understanding the context of care delivery and the optimization of systems to enable health professionals to deliver high quality care. Research evidence indicates that development of best practice within microsystems has the potential to improve patient outcomes [21]. Contextual variation at the unit level in healthcare using validated instruments has been largely unexplored. However, a recent study of public health and social services settings in Finland examined differentiation in organizational culture and climate across work units [42]. Individual-level data were collected using the Organizational Social Context (OSC) instrument [43] to measure work unit culture and climate. The investigators concluded that different organizational climates and cultures exist within work units and at organizational levels. Given the importance of the patient care unit as an essential functional component of an organization (one at which quality of care and patient safety are realized) [21, 22, 44], the capacity of the ACT to discriminate between such units is a highly desirable feature of the instrument.

To assess variation in the 10 ACT concepts as dependent variables, we assessed the mean scores for each concept by unit and by practice specialty. The statistically significant differences, between mean scores on all 10 concepts by unit and for 9 of the 10 concepts by practice specialty (Table 4) and the ascending order of mean scores in the caterpillar plots (Figure 1), show that some units departed significantly from the overall level of each of the 10 concepts across the sample. These findings suggest adequate variability between units on the ACT concepts in this sample. Such findings, therefore, provide evidence for the capacity of the ACT to discriminate between units. This attribute of the instrument is vital to distinguishing and measuring contextual dimensions of the patient care functional unit that are important to optimizing quality of care. This instrument, therefore, shows promise in offering a measure of the status of the microsystem and highlighting areas in which modifications are required.

A recent comparative analysis of measurement tools for organizational context demonstrates some overlap with extant context tools and the 10 dependent variables in ACT. In this analysis, French and colleagues [45] identified 18 tools; the ACT was not included due to the date restrictions of their study. Seven common themes or attributes across the 18 tools were identified: organizational learning culture, vision, leadership, knowledge need, acquisition of new knowledge, knowledge sharing, and knowledge use. Four of these themes are conceptually similar to the ACT concepts, specifically organizational learning culture (with ACT culture), leadership (with ACT leadership), knowledge sharing (with several ACT concepts including formal interactions, informal interactions, organizational slack-time) and knowledge use (with ACT formal interactions and ACT informal interactions). Eleven of the eighteen tools identified by French and colleagues [45] contained elements of these four themes. The majority of these tools (8 of 11) were developed in the field of organizational theory generally, and were not specific to healthcare. Three tools had some conceptual similarity to ACT concepts: (1) the ABC Survey [46] (attributes assessed: knowledge sharing, knowledge use); (2) KEYS Knowledge Exchange Yields Success Questionnaire [47] (attribute assessed: leadership); and, (3) the Research and Development Index [48] (attribute assessed: knowledge use). Two of these three tools (ABC Survey and KEYS Knowledge Exchange Yields Success Questionnaire) do not have published reliability and validity assessments and the third tool (Research and Development Index) has only been used at an organizational (NHS Trust) level, not at a unit level.

Discrimination Between Specialties

Previous multivariate research by Mallidou et al. [49] demonstrated the existence of nurse specialty subcultures. In that research, four nursing specialty cultures were assessed: (1) medical, (2) surgical, (3) intensive care, and (4) emergency care. Mallidou and colleagues demonstrated that nurse and patient outcomes (e.g., job satisfaction, quality of care and adverse patient occurrence) in acute care hospitals were shaped by nursing specialty subcultures. In our research, while practice specialty contributed independently to the explained variance, it is less clear whether our findings support its inclusion as a sampling criterion. For instance, in four of the 10 ACT concepts (social capital, organizational slack-staffing, informal interactions, and structural and electronic resources) practice specialty accounted for 0% of the variance, while in two concepts it accounted for almost 17% of the variance (evaluation and formal interactions). Specialty only showed a statistically significant association with two of the contextual concepts - evaluation and formal interactions, with critical care respondents scoring higher in both cases.

Upon further reflection on our findings in relation to Mallidou et al.'s [49] study, a conceptual issue and an inter-related unit of analysis issue become apparent; that is, what is the appropriate scope of a specialty? Said another way, it could be argued that in the case of this research, only one practice specialty was explored, that is, pediatrics - and further categorizing of nurses into medical, surgical, and critical care is more accurately a sub-specialty classification. That said, the scope and extent of practice specialty (and potentially sub-specialty) sampling criteria demand careful consideration of how nurses ascribe membership to particular practice specialties; investigators must weigh this methodological decision thoughtfully.

Support for innovative ideas was the only unit-level variable that showed a consistent and statistically significant association with the majority (8 of 10) of ACT context variables; the two exceptions were two of the organizational slack concepts (staffing and time). Underpinning these findings is an assumption that support for innovativeness is a collectively held value and that support for innovation behaves in a manner over and above the additive behavior of the individual members in the unit. These findings parallel some of the ideas originally put forth by Rogers [50] who suggested that innovativeness is related to variables such as leadership, internal organizational structural characteristics and external characteristics of the organization. Several of the ACT concepts map onto Rogers' ideas, for instance, the ACT concept of leadership maps onto leadership, and formal and informal interactions map onto internal organizational structural characteristics. The strong association between support for innovative ideas and eight of the 10 ACT contextual variables suggests the importance of support for innovative ideas in explaining the between-unit variation for the concept, particularly given that individual background and practice specialty factors were controlled for in our models.

In our final model results (Model 3), we can see that significant residual unit variations remain after controlling for the individual and the unit-level variables entered into our models. Less than 60% of the variance was explained in leadership, organizational slack-staffing, organizational slack-space, organizational slack-time, and formal interactions. This suggests that future research is needed to identify other factors that may help explain the residual variation remaining in these contextual variables.

Limitations

We might have explored Level 1 regression equations further by modeling each of the within-unit regression coefficients as a function of the unit-level factors, allowing the slopes to vary among units (i.e., random-effects models). However, we deemed the sample size per unit (on average 25 nurses) too small to explore cross-level interactions, making it impossible to estimate the variability in such regression coefficients accurately. Therefore, all regression coefficients other than the intercept were constrained to be constant within units (i.e., a fixed-effects model).

Conclusion

The findings reported here represent the third published argument for validity of the ACT and add to the evidence supporting its use to discriminate patient care units by all 10 contextual concepts. We further found evidence of relationships between a variety of individual- and unit-level variables that explained much of this between-unit variation for each of the 10 ACT concepts. Future research will include an examination of the relationships between the ACT's contextual factors and research utilization by nurses and ultimately the relationships between context (as measured by the ACT), research utilization, and outcomes for patients.

References

  1. Dopson S, Fitzgerald L, (eds.): Knowledge to Action: Evidence-Based Health Care in Context. 2005, Oxford: Oxford University Press

  2. Scott-Findlay S, Golden-Biddle K: Understanding how organizational culture shapes research use. J Nurs Adm. 2005, 35 (7-8): 359-365.

  3. Stetler C: Role of the organization in translating research into evidence-based practice. Outcome Manag. 2003, 7 (3): 97-105.

  4. Estabrooks CA, Scott S, Squires JE, Stevens B, O'Brien-Pallas L, Watt-Watson J, Profetto-McGrath J, McGilton K, Golden-Biddle K, Lander J, et al: Patterns of research utilization on patient care units. Implement Sci. 2008, 3: 31-10.1186/1748-5908-3-31.

  5. Scott SD, Estabrooks CA, Allen M, Pollock C: A context of uncertainty: how context shapes nurses' research utilization behaviors. Qual Health Res. 2008, 18 (3): 347-357. 10.1177/1049732307313354.

  6. Titler M: Translation science and context. Research and Theory for Nursing Practice: An International Journal. 2010, 24 (1): 35-55. 10.1891/1541-6577.24.1.35.

  7. Woods N, Magyary D: Translational research: Why nursing's interdisciplinary collaboration is essential. Research and Theory for Nursing Practice: An International Journal. 2010, 24 (1): 9-24. 10.1891/1541-6577.24.1.9.

  8. Redfern S, Christian S: Achieving change in health care practice. J Eval Clin Pract. 2003, 9 (2): 225-238. 10.1046/j.1365-2753.2003.00373.x.

  9. Titler MG: Methods in translation science. Worldviews Evid Based Nurs. 2004, Q1: 38-48.

  10. McCormack B, McCarthy G, Wright J, Coffey A: Development and testing of the Context Assessment Index (CAI). Worldviews Evid Based Nurs. 2009, 6 (1): 27-35. 10.1111/j.1741-6787.2008.00130.x.

  11. Wallin L: Knowledge translation and implementation research in nursing. Int J Nurs Stud. 2009, 46: 576-587. 10.1016/j.ijnurstu.2008.05.006.

  12. Kitson A, Harvey G, McCormack B: Enabling the implementation of evidence based practice: A conceptual framework. Qual Health Care. 1998, 7 (3): 149-158. 10.1136/qshc.7.3.149.

  13. Grol R, Berwick D, Wensing M: On the trail of quality and safety in health care. BMJ. 2008, 336 (7635): 74-76. 10.1136/bmj.39413.486944.AD.

  14. Grol R, Grimshaw J: From best evidence to best practice: Effective implementation of change in patients' care. Lancet. 2003, 362 (9391): 1225-1230. 10.1016/S0140-6736(03)14546-1.

  15. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovation in service organizations: Systematic review and recommendations. Milbank Q. 2004, 82 (4): 581-629. 10.1111/j.0887-378X.2004.00325.x.

  16. Fleuren M, Wiefferink K, Paulussen T: Determinants of innovation within health care organizations: Literature review and Delphi study. Int J Qual Health Care. 2004, 16 (2): 107-123. 10.1093/intqhc/mzh030.

  17. McCormack B, Kitson A, Harvey G, Rycroft-Malone J, Titchen A, Seers K: Getting evidence into practice: The meaning of 'context'. J Adv Nurs. 2002, 38 (1): 94-104. 10.1046/j.1365-2648.2002.02150.x.

  18. Estabrooks C, Squires J, Cummings G, Birdsell J, Norton P: Development and assessment of the Alberta Context Tool. BMC HSR. 2009, 9: 234.

  19. Estabrooks C, Squires J, Hayduk L, Cummings G, Norton P: Advancing the argument for validity of the Alberta Context Tool with healthcare aides in residential long-term care. BMC MRM. 2011, 11: 107.

  20. American Educational Research Association, American Psychological Association, National Council on Measurement in Education: Standards for Educational and Psychological Testing. 1999, Washington, D.C.: American Educational Research Association

  21. Nelson E, Batalden P, Huber T, Mohr J, Godfrey M, Headrick L, Wasson J: Microsystems in health care: Part 1. Learning from high performing front-line clinical units. J Qual Improv. 2002, 28: 472-493.

  22. Disch J: Clinical Microsystems: The building blocks of patient safety. Creative Nursing. 2006, 3: 13-14.

  23. Godfrey M, Nelson E, Wasson J, Mohr J, Batalden P: Microsystems in healthcare: Part 3. Planning patient-centered services. Joint Commission Journal on Quality and Safety. 2003, 29 (4): 159-170.

  24. Goldschmidt K, Gordin P: A model of nursing care Microsystems for a large neonatal intensive care unit. Advances in Neonatal Care. 2006, 6 (2): 81-87. 10.1016/j.adnc.2006.01.003.

  25. Wasson J, Godfrey M, Nelson E, Mohr J, Batalden P: Microsystems in healthcare: Part 4. Planning patient centered care. Joint Commission Journal on Quality and Safety. 2003, 29 (5): 227-237.

  26. Sales A, Sharp N, Li Y-F, Lowy E, Greiner G, Liu C-F, Alt-White A, Rick C, Sochalski J, Mitchell P, et al: The association between nursing factors and patient mortality in the Veterans Health Administration: The view from the nursing unit level. Medical Care. 2008, 46 (9): 938-945. 10.1097/MLR.0b013e3181791a0a.

  27. Nelson E, Godfrey M, Batalden P, Berry S, Bothe A, McKinley K, Melin C, Meuthing S, Moore G, Wasson J, et al: Clinical Microsystems, Part 1. The building blocks of health systems. The Joint Commission Journal on Quality and Patient Safety. 2008, 34 (7): 367-378.

  28. Glick WH: Conceptualizing and measuring organizational and psychological climate: Pitfalls in multilevel research. Academy of Management Review. 1985, 10 (3): 601-616.

  29. James LR: Aggregation bias in estimates of perceptual agreement. J Appl Psychol. 1982, 67 (2): 219-229.

  30. Bliese PD: Within-group agreement, non-independence, and reliability: Implications for data aggregation and analysis. Multilevel Theory, Research and Methods in Organizations: Foundations, Extensions and New Directions. Edited by: Klein KJ, Kozlowski SWJ. 2000, San Francisco: Jossey-Bass, 349-381.

  31. Vogus TJ, Sutcliffe KM: The Safety Organizing Scale: Development and validation of a behavioral measure of safety culture in hospital nursing units. Med Care. 2007, 45 (1): 46-54. 10.1097/01.mlr.0000244635.61178.7a.

  32. Rosenthal R, Rosnow RL: Essentials of Behavioral Research: Methods and Data Analysis. 2nd edition. 1991, New York: McGraw Hill

  33. Keppel G: Design and Analysis: A Researcher's Handbook. 1991, Englewood Cliffs, New Jersey: Prentice-Hall

  34. Estabrooks CA, Midodzi WK, Cummings GG, Ricker KL, Giovannetti P: The impact of hospital nursing characteristics on 30-day mortality. Nurs Res. 2005, 54 (2): 74-84.

  35. Estabrooks CA, Midodzi WK, Cummings GG, Wallin L: Predicting research use in nursing organizations: a multilevel analysis. Nurs Res. 2007, 56 (4 Suppl): S7-S23.

  36. Bartko J: On various intraclass correlation reliability coefficients. Psychol Bull. 1976, 83: 762-765.

  37. McGraw K, Wong S: Forming inferences about some intraclass correlation coefficients. Psychological Methods. 1996, 1: 30-46.

  38. Shrout PE, Fleiss JL: Intraclass correlations: Uses in assessing rater reliability. Psychol Bull. 1979, 86 (2): 420-428.

  39. Bryk A, Raudenbush S: Hierarchical Linear Models: Applications and Data Analysis Methods. 2nd edition. 2002, Thousand Oaks, CA: Sage

  40. Adewale AJ, Hayduk L, Estabrooks CA, Cummings GG, Midodzi WK, Derksen L: Understanding hierarchical linear models: Applications in nursing research. Nursing Research. 2007, 56 (4 Supplement 1): S40-S46.

  41. Maslach C, Jackson SE, Leiter MP: Maslach Burnout Inventory. 3rd edition. 1996, Palo Alto, CA: Consulting Psychologists Press

  42. Rostila I, Suominen T, Asikainen P, Green P: Differentiation of organizational climate and culture in public health and social services in Finland. J Pub Health. 2011, 19: 39-47.

  43. Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood K, Mayberg S, Green P: Assessing the Organizational Social Context (OSC) of Mental Health Services: Implications for Implementation Research and Practice. Administration and Policy in Mental Health. 2008, 35 (1): 98-113. 10.1007/s10488-007-0148-5.

  44. Batalden P, Nelson E, Edwards W, Godfrey M, Mohr J: Developing small clinical units to attain peak performance. Joint Commission Journal on Quality and Safety. 2003, 29 (11): 575-585.

  45. French B, Thomas L, Baker P, Burton C, Pennington L, Roddham H: What can management theories offer evidence-based practice: A comparative analysis of measurement tools for organizational context. Implement Sci. 2009, 4: 28-10.1186/1748-5908-4-28.

  46. Clarke H: A.B.C. Survey. 1991, Vancouver: Registered Nurses Association of British Columbia

  47. Canadian Health Services Research Foundation: Is research working for you? A self-assessment tool and discussion guide for health services management and policy organizations. 2005, Ottawa: Canadian Health Services Research Foundation

  48. Watson B, Clarke C, Swallow V, Forster S: Exploratory factor analysis of the research and development culture index among qualified nurses. Journal of Clinical Nursing. 2005, 14: 1042-1047. 10.1111/j.1365-2702.2005.01214.x.

  49. Mallidou A, Cummings G, Estabrooks C, Giovannetti P: Nurse specialty subcultures and patient outcomes in acute care hospitals: A multiple-group structural equation modelling. Int J Nurs Stud. 2011, 48 (1): 81-93. 10.1016/j.ijnurstu.2010.06.002.

  50. Rogers EM: Diffusion of Innovations. 4th edition. 1995, New York: The Free Press

Acknowledgements and funding

Acknowledgements: The authors wish to acknowledge additional co-investigators of the CIHR Team in Children's Pain for contributions to this study: Melanie Barwick, Fiona Campbell, Christine Chambers, Janice Cohen, G. Allen Finley, Celeste Johnston, Tricia Kavanagh, Margot Latimer, Shoo Lee, Sylvie Le May, Patrick McGrath, Judith Rashotte, Christina Rosmus, Doris Sawatzky-Dickson, Souraya Sidani, Jennifer Stinson, Robyn Stremler, Anne Synnes, Anna Taddio, Edith Villeneuve, Andrew Willan, Fay Warnock, and Janet Yamada. We also wish to acknowledge Laura Abbott, Project Manager for the CIHR Team in Children's Pain, and Anne-Marie Adachi, Research Manager for Project 2 (described in this manuscript) of the CIHR Team in Children's Pain. The Centre for Computational Biology at The Hospital for Sick Children (Toronto) created, housed, and supported the database.

Funding: Funding for this project was provided by the Canadian Institutes of Health Research (CIHR) (CTP-79854 and MOP-86605). CAE holds a CIHR Canada Research Chair in Knowledge Translation. JES is funded by CIHR Postdoctoral and Bisby Fellowships. SS and GGC are supported by career scientist awards from the Alberta Heritage Foundation for Medical Research (AHFMR) and CIHR.

Author information

Corresponding author

Correspondence to Carole A Estabrooks.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

CAE, SS, GGC and BS participated in designing the study and securing its funding. CAE, GGC, SHK, and WKM designed the analytic plan for the analyses presented in this paper. SHK conducted the statistical analysis; CAE, JES, SHK, and WKM interpreted the statistical analysis. CAE, JES, AMH, SS, GGC, SHK and WKM participated in drafting the manuscript. All authors provided critical commentary on the manuscript and approved the final version.

Electronic supplementary material

Additional File 1: Inclusion and Exclusion Criteria by Professional Group. A summary of the inclusion and exclusion criteria applied to healthcare professionals in the study. (DOC 54 KB)

Additional File 2: Intraclass Correlation Calculation. Compares the intraclass correlation calculated using a random coefficient (multilevel) model with that obtained from a one-way random-effects ANOVA model. (DOC 48 KB)
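For readers comparing the two approaches described in Additional File 2, the standard formulations are as follows (the notation here is generic and not drawn from the additional file). From a one-way random-effects ANOVA, with \(MS_{B}\) and \(MS_{W}\) the between- and within-unit mean squares and \(k\) the (average) number of respondents per unit,

\[ \mathrm{ICC}(1) = \frac{MS_{B} - MS_{W}}{MS_{B} + (k-1)\,MS_{W}} \]

and, equivalently, from a random-intercept (multilevel) model, with \(\tau_{00}\) the between-unit variance and \(\sigma^{2}\) the within-unit (residual) variance,

\[ \mathrm{ICC} = \frac{\tau_{00}}{\tau_{00} + \sigma^{2}} \]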

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Estabrooks, C.A., Squires, J.E., Hutchinson, A.M. et al. Assessment of variation in the alberta context tool: the contribution of unit level contextual factors and specialty in Canadian pediatric acute care settings. BMC Health Serv Res 11, 251 (2011). https://doi.org/10.1186/1472-6963-11-251
