
Exploratory randomized controlled trial evaluating the impact of a waiting list control design

Abstract

Background

Employing waiting list control designs in psychological and behavioral intervention research may artificially inflate intervention effect estimates. This exploratory randomized controlled trial tested this proposition using a brief intervention for problem drinkers, one domain of research in which waiting list control designs are used.

Methods

All participants (N = 185) were provided with brief personalized feedback intervention materials. They were randomly allocated either to be told that they were in the intervention condition and that these materials were the intervention, or to be told that they were in the waiting list control condition, that they would receive access to the intervention in four weeks, and that the materials were provided as information in the meantime.

Results

A total of 157 participants (85%) were followed up after 4 weeks. Between-group differences were found in one of four outcomes (proportion drinking within safe drinking guidelines). An interaction was identified between the experimental manipulation and stage of change at study entry, such that change was arrested among participants who were more ready to change and were told they were on the waiting list.

Conclusions

Trials with waiting list control conditions may overestimate treatment effects, though the extent of any such bias appears likely to vary between study populations. Arguably, such designs should be used only where this threat to valid inference has been carefully assessed.


Background

There has been growing concern over the use of waiting list control designs in psychological and behavioral intervention research [1–8]. A waiting list design has ethical advantages because it allows the provision of care (if delayed) to research participants who are seeking help while still permitting a non-intervention evaluation, but it has been noted that such designs may overestimate intervention effects [1–7]. This is because participants assigned to a waiting list control condition appear to improve less (or not at all) than would be expected for people who are concerned about their behavior and who are taking steps to change. In a discussion of this possibility, Miller and Rollnick point out that examination of patterns of change in participants assigned to waiting list control conditions may indicate that they perceive they are expected to ‘wait’ to change until receiving the intervention and compliantly do so [9]. This contrasts with studies not employing waiting list designs, in which control group participants tend to improve [10–12].

There has been some research on this topic, usually framed in the context of expectancies or demand characteristics, though there has been little dedicated study of the latter outside laboratory settings [13]. Previous studies have included a range of waiting list designs, some in which the participant is told that they will have to wait to receive treatment and others where a period of monitoring is described as a necessary baseline [14, 15]. For the rare interventions where it is impossible for the participant to know whether they are receiving an intervention (e.g., distance healing), research has also been conducted where the presence or absence of the intervention is crossed with the participant being told that they are, or are not, receiving the intervention in order to estimate the impact of expectancies [16]. Other factorial designs have also been used to evaluate unintended impacts of the research process [17].

Brief interventions for problem (i.e., hazardous or harmful) drinking are one area of research utilizing waiting list control designs [18, 19]. Given evidence that problem drinking is often resolved without treatment [10, 20], the use of a waiting list control design may be unethical if it has the effects ascribed to it. While other aspects of the research process have been evaluated in this field [21], there has been no experimental study of the effects of waiting list control conditions on participant drinking, i.e., of reactivity within this research design. In this exploratory randomized controlled trial we sought to develop a method for testing two hypotheses. Hypothesis 1: when given the same intervention material, people told that they are in the waiting list control condition will report heavier drinking at follow-up than people told that they are in the intervention condition (a main effect). Hypothesis 2: the effect predicted in Hypothesis 1 will be stronger among participants who are more ready to change their drinking than among those who are less ready to change (an interaction effect).

Methods

This study employed a ‘no difference’ trial paradigm in which all participants are given access to the same intervention while other aspects of the research process are experimentally manipulated [22]. Potential participants were recruited through Toronto newspaper advertisements inviting people ‘concerned about their drinking’ to help with the evaluation of self-directed interventions. The newspaper advertisement also mentioned that compensation would be provided and that the study was not a treatment program. Respondents telephoned study staff and were mailed a consent form and baseline assessment questionnaire. Participants who returned the consent form and baseline questionnaire were randomized either to: a) be told that they were in the intervention condition and be sent feedback generated from a known effective intervention, the Check Your Drinking (CYD) website [23–26]; or b) be told that they were in the waiting list condition, that they would be sent details of the intervention in 4 weeks, and that information about their drinking was provided in the meantime (this information was identical to the feedback generated using the CYD website). See below for the exact content of this experimental manipulation. Follow-up was conducted four weeks after baseline, and participants received $20 for returning the follow-up questionnaire. Both baseline and follow-up assessments were conducted by postal questionnaire. After the follow-up questionnaire was returned, all participants were provided access to the Alcohol Help Centre (AHC) online program, and participants in the ‘waiting list control condition’ were told that this was the intervention [27]. Thus, the only difference between the two groups was that those in one condition were told that they were receiving an intervention and those in the other were told that they had to wait four weeks before getting access to the intervention. The follow-up length of four weeks was chosen partly to replicate a published trial of a similar web-based intervention for problem drinkers which used a waiting list control design [18].

Text used as experimental manipulation

Text used in intervention condition: “You are in the intervention condition of this study. We have developed a personalized feedback intervention for people concerned about their drinking. We generated a Final Report for you from this intervention and it is included with this letter.”

Text used in waiting list condition: “You are in the waiting list condition of this study. You will need to wait for 4 weeks until we can send you the intervention materials. In the meantime, we have generated some personalized information about your drinking and it is included with this letter.”

Ethics

After the study was described to the subjects, written informed consent was obtained. This consent procedure and the conduct of the study were approved by the standing ethics review committee of the Centre for Addiction and Mental Health.

Baseline and outcome variables and analysis plan

The outcome variables were: number of drinks in a typical recent week; largest number of drinks on one occasion; and the AUDIT-C [28], the consumption subscale of the Alcohol Use Disorders Identification Test (AUDIT) [29, 30], which includes three items (frequency of drinking, number of drinks per drinking day, and frequency of five or more drinks on one occasion) with total scores ranging from 0 to 12. The final outcome measure was not planned prior to the completion of the study and comprised the proportion of participants drinking within the Canadian safe drinking guidelines (for males, no more than 15 drinks per week and three drinks per drinking day; for females, no more than 10 drinks per week and two drinks per drinking day) [31]. The questionnaire contained a graphic describing a standard drink, which in Canada contains 13.6 grams of ethanol [31]. These variables were assessed at baseline and at follow-up. In addition, at baseline, participants provided information on their demographic characteristics and completed the other items of the full AUDIT and the Readiness to Change (RTC) questionnaire [32, 33].
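
To make the scoring and classification rules above concrete, the following minimal sketch (not the authors’ code; function and variable names are hypothetical) computes an AUDIT-C total from its three items, each scored 0 to 4, and applies the weekly and per-occasion limits quoted from the Canadian guidelines.

```python
# Hypothetical illustration of the scoring rules described in the text;
# not the study's actual analysis code.

def audit_c_score(frequency_item: int, quantity_item: int, binge_item: int) -> int:
    """Sum the three AUDIT-C items, each scored 0-4, giving a total of 0-12."""
    for item in (frequency_item, quantity_item, binge_item):
        if not 0 <= item <= 4:
            raise ValueError("each AUDIT-C item is scored 0-4")
    return frequency_item + quantity_item + binge_item


def within_canadian_guidelines(sex: str, drinks_per_week: float, max_drinks_per_day: float) -> bool:
    """Apply the weekly and per-drinking-day limits quoted above."""
    if sex == "male":
        return drinks_per_week <= 15 and max_drinks_per_day <= 3
    return drinks_per_week <= 10 and max_drinks_per_day <= 2


# Example: a male participant reporting 12 drinks in a typical week,
# with at most 3 drinks on any drinking day.
print(audit_c_score(3, 1, 2))                      # 6
print(within_canadian_guidelines("male", 12, 3))   # True
```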

Continuous outcome variables were examined for outliers and Winsorized to normalize the distribution by replacing values more than three standard deviations from the mean with the next highest value. Analyses for the continuous variables were conducted using stepwise linear regression in which the baseline value of each outcome variable was entered in Step 1. In Step 2, participant experimental condition was entered, and the Action subscale of the RTC was entered as a continuous main effect variable. Finally, in Step 3, an interaction term between experimental condition and the Action subscale was added. The categorical variable, proportion of participants drinking within safe drinking guidelines, was compared between experimental conditions using Fisher’s exact test. Note that an earlier version of this analysis was conducted using logistic regression in order to allow the inclusion of the Action subscale as one of the predictors (including a main effect and an interaction term with experimental condition). However, as there was no significant (p > .05) main effect or interaction effect of the Action subscale, and because the inclusion of the Action subscale did not substantively influence the main effect of experimental condition observed, the simpler Fisher’s exact test is presented in this paper. Data missing at follow-up were not replaced, to mimic the treatment of missing data employed in studies evaluating brief interventions of this type with waiting list control designs [18] and because attrition was not judged likely to be problematic (see Endnote a).
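
The following sketch illustrates, under stated assumptions, how the Winsorization and stepwise regression steps described above could be implemented; it is not the authors’ code. It assumes a pandas DataFrame with hypothetical columns followup and baseline (one continuous outcome), condition (0 = told intervention, 1 = told waiting list), and action (RTC Action subscale score), and for simplicity Winsorizes only the upper tail.

```python
# Minimal sketch of the analysis plan described above, under assumed column
# names; not the study's actual code.
import pandas as pd
import statsmodels.formula.api as smf


def winsorize_3sd(series: pd.Series) -> pd.Series:
    """Replace values more than 3 SD above the mean with the next highest
    observed value (upper tail only, for simplicity)."""
    cutoff = series.mean() + 3 * series.std()
    next_highest = series[series <= cutoff].max()
    return series.where(series <= cutoff, next_highest)


def stepwise_models(df: pd.DataFrame):
    """Step 1: baseline value only; Step 2: add experimental condition and the
    Action subscale; Step 3: add the condition-by-Action interaction."""
    step1 = smf.ols("followup ~ baseline", data=df).fit()
    step2 = smf.ols("followup ~ baseline + condition + action", data=df).fit()
    step3 = smf.ols("followup ~ baseline + condition * action", data=df).fit()
    return step1, step2, step3


# Usage with a hypothetical data frame `df`:
# df["followup"] = winsorize_3sd(df["followup"])
# df["baseline"] = winsorize_3sd(df["baseline"])
# step1, step2, step3 = stepwise_models(df)
# print(step3.summary())
```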

Study participants

A total of 191 participants responded to the newspaper advertisements and returned a signed consent form and baseline questionnaire. Of these, 185 were hazardous or harmful drinkers (as defined by an AUDIT score of 8 or more) and were included in this study. Bivariate comparisons of demographic and baseline drinking characteristics between experimental conditions revealed no significant differences (p > .05). The mean age of the 185 participants was 47.3 (SD 11.4), 70.3% were male, 55.1% had some post-secondary education, 31.4% were married or living in common-law relationships, 52.4% were employed full- or part-time, and 48.1% reported a family income of less than $30,000 per year. Baseline levels of problem drinking were quite severe for a community-recruited sample, with a mean AUDIT score of 24.1 (SD 7.0; a score of 20 or more on the AUDIT is indicative of possible alcohol dependence). Participants reported typically consuming an average of 35.3 (SD 21.4) drinks per week, and the mean highest number of drinks consumed on one occasion in the last year was 14.7 (SD 7.5). Participants’ baseline AUDIT-C mean score was 9.0 (SD 2.0).

Results

Follow-up rates were satisfactory, with 157 participants (85%) returning their four-week survey. An additional two participants did not complete the items for the Action subscale at baseline, leaving 155 participants available for analysis. Figure 1 displays an overview of the recruitment and follow-up rates of the trial.

Figure 1. CONSORT flowchart.

There were no significant differences in follow-up rates between experimental conditions (told intervention condition = 83.5%; told waiting list = 86.2%; p = .68). Table 1 displays the data for the three regression analyses. In order to facilitate interpretation of these analyses, Table 2 displays the means (SD) of the three continuous variables at baseline and follow-up, separated by experimental condition and a median split on participants’ baseline Action subscale scores. For the outcome variable, typical drinks per week, there was a significant interaction between experimental condition and the Action subscale (p = .05). Inspection of the estimated marginal means in Table 2 revealed that, for participants who rated themselves as low on the Action subscale, condition allocation (waiting list or intervention) had little or no impact on their drinking. However, participants who scored above the median on the Action subscale and who were in the waiting list condition reported drinking about 6 drinks per week more than their counterparts in the intervention condition. A similar pattern of results was observed for the largest number of drinks consumed on one occasion (p < .01). There were no significant experimental condition or interaction effects (p > .05) for AUDIT-C scores.

Table 1 Relationship of four-week follow-up drinking with intervention condition and level of action for change intent after controlling for baseline drinking
Table 2 Mean (SD) drinking variables at baseline and four-week follow-up by study condition (told in intervention versus told on waiting list) and median split on Action stage variable (N = 155)a
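
As an illustration of how the cells of Table 2 can be derived, the short sketch below (assumed column names, not the authors’ code) splits participants at the median of the baseline Action subscale and summarizes an outcome by condition and split; it reports raw rather than estimated marginal means.

```python
# Hypothetical sketch of the Table 2 layout: median split on the baseline
# Action subscale, then mean, SD, and n by experimental condition.
import pandas as pd


def table2_summary(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Mean, SD, and n of `outcome` by condition and Action median split."""
    split = (df["action"] > df["action"].median()).map({True: "high", False: "low"})
    return (
        df.assign(action_split=split)
        .groupby(["condition", "action_split"])[outcome]
        .agg(["mean", "std", "count"])
    )

# Usage with hypothetical columns: table2_summary(df, "drinks_per_week_followup")
```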

Proportion drinking within safe drinking guidelines

Despite scoring 8 or more on the AUDIT, 4.3% of participants (8/185), or 4.5% (7/157) of participants followed up, were drinking within safe drinking guidelines at the time of the baseline assessment. At follow-up, 20.4% (32/157) were drinking within safe drinking guidelines. There was a main effect of study condition, both on the proportion of participants drinking within safe drinking guidelines at follow-up (Fisher’s exact test, p = .03; told intervention condition = 27.6%; told waiting list = 13.6%) and on the proportion of those who did not drink within safe drinking guidelines at baseline but did so at follow-up (Fisher’s exact test, p = .009; told intervention condition = 25.0%; told waiting list = 8.6%).
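
For readers wishing to check comparisons of this kind, the sketch below runs Fisher’s exact test with scipy. The 2x2 cell counts are reconstructed approximately from the reported percentages and follow-up numbers (roughly 76 followed up in the told-intervention arm and 81 in the told-waiting-list arm), so they are illustrative rather than the study’s exact data.

```python
# Approximate re-creation of the first comparison above; counts are
# reconstructed from the reported percentages and are illustrative only.
from scipy.stats import fisher_exact

#                       within guidelines, not within guidelines
told_intervention = [21, 55]   # ~27.6% of ~76 followed up
told_waiting_list = [11, 70]   # ~13.6% of ~81 followed up

odds_ratio, p_value = fisher_exact([told_intervention, told_waiting_list])
print(round(odds_ratio, 2), round(p_value, 3))  # p should be close to the reported .03
```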

Discussion

This exploratory trial developed a research design to examine the effects of being in a waiting list control condition in psychological and behavioural intervention research. The intention behind this study design was to contrast the effect of the intervention deliberately confounded with expectancy (as it is usually delivered and evaluated) with the effect of the intervention without expectancy, thereby estimating the expectancy effect as the difference between the two. Specifically, the design allows a test of the impact of being told that the participant is in the waiting list control condition because all participants are provided with the same intervention materials (the personalized feedback report generated from the CYD online program). This design is equivalent to the subtractive expectancy placebo proposed by Suedfeld [34], where all participants are given the intervention but a randomized half is led to expect that it is inert with respect to the problem being treated. The advantage of the personalized feedback material for this particular study was that it was brief and could credibly be described as merely information to those in the waiting list control condition, while at the same time being a plausible intervention for those in the other study condition. As such, the feedback has intrinsic placebo properties, and in this respect this intervention is no different from other interventions.

Findings in relation to Hypothesis 1 were mixed: while a lower proportion of participants in the control condition reported drinking within recommended guidelines at follow-up than their intervention group counterparts (an originally unplanned analysis), there were no between-group differences in the number of drinks per week, the highest number of drinks on one occasion, or the heaviness of drinking measured with the AUDIT-C (the three planned outcome measures). The observed differences were in the anticipated direction and are consistent with an expectation that they would be statistically significant in a larger sample. Our test of Hypothesis 2 strengthens this possibility, as it demonstrated heavier drinking among waiting-list control participants who were more ready to change compared to their counterparts receiving the “intervention.” This finding is consistent with the overarching hypothesised mechanism of effect, that waiting list allocation interrupts efforts at change, and also points to the importance of considering readiness for, or activities towards, change in this regard.

While it would be unjustified to conclude that a waiting list effect exists on the basis of statistically significant main effects in only one of the four outcomes, the interaction effects observed here are provocative. For two of the continuous outcome variables (largest number of drinks on one occasion and number of drinks in a typical week), there were interactions between experimental condition and readiness to change (having started to do something about their drinking on entering the trial). It should be noted that these analyses were conducted in two different ways: the first using the categorical stages of change designation calculated using the full RTC scale [32] and the second employing just the Action subscale. This repetition of analyses is justified given the exploratory nature of this trial, though the lack of an a priori constructed data analysis plan is acknowledged. Also, the finding that only the Action subscale was predictive of outcome aligns with findings from previous research on Stages of Change, albeit research using a different measure of the construct [35].

Despite the strengths of the research design employed, one limitation may be implicit in it. An additive model is assumed, in that both groups are intended to actually read the feedback report [36]. Any differences between groups in doing so would mean that apparent waiting list effects are not entirely due to expectancies. Specifically, if being told that one is in the intervention condition makes one more likely to read, or to think about, the intervention material, then the nature of the effect involves an interaction and is thus more complex [36]. This means that the effects observed here are contingent upon this specific feature of this study. The exploratory nature of this study also imposes various limits to inference. A power calculation was not undertaken a priori, making the attainment of statistical significance here less important and, relatedly, clear and unambiguous interpretation of study findings more challenging. This study does, however, provide effect estimates for future, larger replication studies [37]. Another limitation was that there was no way to distinguish between those who were responding to the newspaper advertisements because they were looking for help regarding their drinking and those more motivated by the $20 for participating in the trial. To the extent that the financial incentive motivated participation, it is likely that the observed differences under-represent the size of the true effects, particularly in light of the observed readiness to change findings. Alternatively, if the newspaper recruitment resulted in a sample that was highly motivated to change (e.g., in comparison to a proactively recruited sample), then the results of this trial could overestimate the impact of a waiting list design in such a population. The latter possibility is highly unlikely to apply to treatment studies of help seekers, but should be borne in mind for brief intervention trials based on opportunistic recruitment in healthcare settings.

Future directions for this research include examining whether a waiting list control manipulation has more impact in particular research settings and with specific populations. For example, in situations where the manipulation is delivered face-to-face, more reactivity to the waiting list control condition may result. There is obvious value in examining the mechanisms behind the hypothesised negative impact of the waiting list control condition. This is particularly true for study populations of confirmed help-seekers. Qualitative interviews could also be used to investigate negative reactions of participants assigned to waiting list or other types of control conditions, particularly among those with clear preferences. A further challenge, once this field of investigation is more developed, will be to separate true expectancy effects associated with compliance with the demand characteristics implicit in waiting list study conditions from participants becoming irritated or disconsolate over not getting the help they hoped to receive and reducing their own efforts to drink less (termed resentful demoralization [38]).

Conclusions

The results of this exploratory study give further weight to the increasing scrutiny recently given to control conditions in general, and to the interpretation of findings from trials employing waiting list control designs in particular. Further, these results point to the need for caution regarding the ethics of assigning participants actively ready to change to a waiting list control condition.

Endnote

a. At the suggestion of one of the reviewers, the primary analyses were re-conducted using an intention-to-treat approach (missing data at follow-up replaced with the respective baseline values). However, as the pattern of results was unchanged from that reported here, this alternative analysis was not reported in this paper.
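
A minimal sketch of the sensitivity analysis described in this endnote, assuming a pandas DataFrame with hypothetical paired baseline/follow-up columns (an illustration, not the authors’ code):

```python
# Baseline values carried forward where the follow-up measurement is missing
# (illustrative; column names are assumptions).
import pandas as pd


def baseline_carried_forward(df: pd.DataFrame, outcome: str) -> pd.Series:
    """Return `<outcome>_followup` with missing values replaced by the
    participant's `<outcome>_baseline` value."""
    return df[f"{outcome}_followup"].fillna(df[f"{outcome}_baseline"])


# Usage (hypothetical):
# df["drinks_per_week_itt"] = baseline_carried_forward(df, "drinks_per_week")
```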

References

  1. Hart T, Fann JR, Novack TA: The dilemma of the control condition in experience-based cognitive and behavioural treatment research. Neuropsychol Rehabil. 2008, 18 (1): 1-21. 10.1080/09602010601082359.

  2. Barkauskas VH, Lusk SL, Eakin BL: Selecting control interventions for clinical outcome studies. West J Nurs Res. 2005, 27 (3): 346-363. 10.1177/0193945904271446.

  3. Whitehead WE: Control groups appropriate for behavioral interventions. Gastroenterology. 2004, 126 (1 Suppl 1): S159-S163.

  4. Gaudiano BA, Herbert JD: Methodological issues in clinical trials of antidepressant medications: perspectives from psychotherapy outcome research. Psychother Psychosom. 2005, 74 (1): 17-25. 10.1159/000082022.

  5. Basham RB: Scientific and practical advantages of comparative design in psychotherapy outcome research. J Consult Clin Psych. 1986, 54 (1): 88-94.

  6. Hart T, Bagiella E: Design and implementation of clinical trials in rehabilitation research. Arch Phys Med Rehabil. 2012, 93 (8 Suppl): S117-S126.

  7. Mohr DC, Spring B, Freedland KE, Beckner V, Arean P, Hollon SD, Ockene J, Kaplan R: The selection and design of control conditions for randomized controlled trials of psychological interventions. Psychother Psychosom. 2009, 78 (5): 275-284. 10.1159/000228248.

  8. Freedland KE, Mohr DC, Davidson KW, Schwartz JE: Usual and unusual care: existing practice control groups in randomized controlled trials of behavioral interventions. Psychosom Med. 2011, 73 (4): 323-335. 10.1097/PSY.0b013e318218e1fb.

  9. Miller WR, Rollnick S: Motivational Interviewing: Preparing People to Change Addictive Behavior. 2002, New York, NY: Guilford Press, 2

  10. Moyer A, Finney JW: Outcomes for untreated individuals involved in randomized trials of alcohol treatment. J Subst Abuse Treat. 2002, 23: 247-252. 10.1016/S0740-5472(02)00264-7.

  11. de Bruin M, Viechtbauer W, Schaalma HP, Kok G, Abraham C, Hospers HJ: Standard care impact on effects of highly active antiretroviral therapy adherence interventions: A meta-analysis of randomized controlled trials. Arch Intern Med. 2010, 170 (3): 240-250. 10.1001/archinternmed.2009.536.

  12. Jenkins RJ, McAlaney J, McCambridge J: Change over time in alcohol consumption in control groups in brief intervention studies: systematic review and meta-regression study. Drug Alcohol Depen. 2009, 100 (1–2): 107-114.

  13. McCambridge J, de Bruin M, Witton J: The effects of demand characteristics on research participant behaviours in non-laboratory settings: a systematic review. PLoS One. 2012, 7 (6): e39116-10.1371/journal.pone.0039116.

  14. Borkovec TD, Grayson JB, Cooper KM: Treatment of general tension: subjective and physiological effects of progressive relaxation. J Consult Clin Psych. 1978, 46 (3): 518-528.

  15. Harris KB, Miller WR: Behavioral self-control training for problem drinkers: components of efficacy. Psych Addict Behav. 1990, 4: 82-90.

  16. Walach H, Bosch H, Lewith G, Naumann J, Schwarzer B, Falk S, Kohls N, Haraldsson E, Wiesendanger H, Nordmann A, et al: Effectiveness of distant healing for patients with chronic fatigue syndrome: a randomised controlled partially blinded trial (EUHEALS). Psychother Psychosom. 2008, 77 (3): 158-166. 10.1159/000116609.

  17. McCambridge J, Butor-Bhavsar K, Witton J, Elbourne D: Can research assessments themselves cause bias in behaviour change trials? A systematic review of evidence from solomon 4-group studies. PLoS One. 2011, 6 (10): e25223-10.1371/journal.pone.0025223.

  18. Hester RK, Squires DD, Delaney HD: The Drinker’s Check-up: 12-month outcomes of a controlled clinical trial of a stand-alone software program for problem drinkers. J Subst Abuse Treat. 2005, 28 (2): 159-169. 10.1016/j.jsat.2004.12.002.

  19. Postel MG, de Haan HA, ter Huurne ED, Becker ES, de Jong CA: Effectiveness of a web-based intervention for problem drinkers and reasons for dropout: randomized controlled trial. J Med Internet Res. 2010, 12 (4): e68-10.2196/jmir.1642.

  20. Cunningham JA: Resolving alcohol-related problems with and without treatment: the effects of different problem criteria. J Stud Alcohol. 1999, 60 (4): 463-466.

  21. McCambridge J, Kypri K: Can simply answering research questions change behaviour? Systematic review and meta analyses of brief alcohol intervention trials. PLoS One. 2011, 6 (10): e23748-10.1371/journal.pone.0023748.

  22. Kypri K, McCambridge J, Wilson A, Attia J, Sheeran P, Bowe S, Vater T: Effects of Study Design and Allocation on participant behaviour - ESDA: study protocol for a randomized controlled trial. Trials. 2011, 12 (1): 42-10.1186/1745-6215-12-42.

  23. Cunningham JA, Wild TC, Cordingley J, van Mierlo T, Humphreys K: A randomized controlled trial of an internet-based intervention for alcohol abusers. Addiction. 2009, 104 (12): 2023-2032. 10.1111/j.1360-0443.2009.02726.x.

  24. Doumas DM, Hannah E: Preventing high-risk drinking in youth in the workplace: a web-based normative feedback program. J Subst Abuse Treat. 2008, 34 (3): 263-271. 10.1016/j.jsat.2007.04.006.

  25. Doumas DM, Haustveit T: Reducing heavy drinking in intercollegiate athletes: evaluation of a web-based personalized feedback program. Sport Psychol. 2008, 22: 213-229.

  26. Doumas DM, McKinley LL, Book P: Evaluation of two Web-based alcohol interventions for mandated college students. J Subst Abuse Treat. 2009, 36 (1): 65-74. 10.1016/j.jsat.2008.05.009.

  27. Cunningham JA: Comparison of two internet-based interventions for problem drinkers: randomized controlled trial. J Med Internet Res. 2012, 14 (4): e107-10.2196/jmir.2090.

  28. Dawson DA, Grant BF, Stinson FS, Zhou Y: Effectiveness of the derived Alcohol Use Disorders Identification Test (AUDIT-C) in screening for alcohol use disorders and risk drinking in the US general population. Alcohol Clin Exp Res. 2005, 29 (5): 844-854. 10.1097/01.ALC.0000164374.32229.A2.

  29. Babor TF, De La Fuente MF, Saunders JB, Grant M: AUDIT - The Alcohol use Disorders Identification Test: Guidelines for use in Primary Health Care. 1989, Geneva, Switzerland: World Health Organization

  30. Saunders JB, Aasland OG, Babor TF, De La Fuente JR, Grant M: Development of the Alcohol Use Disorders Identification Test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption— II. Addiction. 1993, 88: 791-804. 10.1111/j.1360-0443.1993.tb02093.x.

  31. Butt P, Beirness D, Cesa F, Gliksman L, Paradis C, Stockwell T: Alcohol and Health in Canada: A Summary of Evidence and Guidelines for Low-Risk Drinking. 2011, Ottawa: Canadian Centre on Substance Abuse

  32. Rollnick S, Heather N, Gold R, Hall W: Development of a short ‘readiness to change’ questionnaire for use in brief, opportunistic interventions among excessive drinkers. Br J Addict. 1992, 87: 743-754. 10.1111/j.1360-0443.1992.tb02720.x.

  33. Heather N, Hönekopp J: A revised edition of the Readiness to Change Questionnaire. 2008, Newcastle: Northumbria University

  34. Suedfeld P: The subtractive expectancy placebo procedure: a measure of non-specific factors in behavioural interventions. Behav Res Ther. 1984, 22 (2): 159-164. 10.1016/0005-7967(84)90104-9.

  35. Bertholet N, Cheng DM, Palfai TP, Samet JH, Saitz R: Does readiness to change predict subsequent alcohol consumption in medical inpatients with unhealthy alcohol use? Addict Behav. 2009, 34 (8): 636-640. 10.1016/j.addbeh.2009.03.034.

  36. McCambridge J, Kypri K, Elbourne D: In randomisation we trust? Possible problems in experimenting with people in behavioural intervention trials. J Clin Epidemiol. in press.

  37. Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, Tyrer P: Framework for design and evaluation of complex interventions to improve health. BMJ. 2000, 321 (7262): 694-696. 10.1136/bmj.321.7262.694.

  38. Cook TD, Campbell DT: Quasi-Experimentation: Design and Analysis Issues for field Settings. 1979, Chicago: Rand McNally


Acknowledgements

During the conduct of this research, John Cunningham was supported as the Canada Research Chair on Brief Interventions for Addictive Behaviours. Kypros Kypri is supported by a National Health & Medical Research Council Senior Research Fellowship (APP1041867) and a Senior Brawn Fellowship from the University of Newcastle. Jim McCambridge is supported by a Wellcome Trust Research Career Development fellowship in Basic Biomedical Science (WT086516MA).

Author information

Corresponding author

Correspondence to John A Cunningham.

Additional information

Competing interests

The authors have no competing interests to declare.

Authors’ contributions

JAC, KK, and JM conceived of the study. JAC conducted the study and the analyses. JAC wrote the first draft of the paper. KK and JM revised subsequent drafts of the paper. All authors have read and approved the final version of the manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Cunningham, J.A., Kypri, K. & McCambridge, J. Exploratory randomized controlled trial evaluating the impact of a waiting list control design. BMC Med Res Methodol 13, 150 (2013). https://doi.org/10.1186/1471-2288-13-150
