A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors

Abstract

Background

Surveys of doctors are an important data collection method in health services research. Ways to improve response rates and to minimise survey response bias and item non-response within a given budget have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors.

Methods

A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types (general practitioners, specialists, specialists-in-training, and hospital non-specialists) and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to an online questionnaire (902); a simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or a sequential mixed mode (online first, followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost.

Results

The online mode had a response rate of 12.95%, compared with 19.7% for the simultaneous mixed mode and 20.7% for the sequential mixed mode. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate than the simultaneous mixed mode, and a 7.7 percentage point lower response rate than the sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than the simultaneous mixed mode and 22% lower than the sequential mixed mode. The cost of the sequential mixed mode was 14% lower than the simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although it exhibited some evidence of response bias.

Conclusions

Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population.

Background

Surveys of medical practitioners can provide important policy-relevant data and information that are often not captured by administrative data or registration databases. There is some suggestion that response rates for surveys of medical practitioners may be falling, with important implications for statistical inference and for the extent to which results can be generalised and used to inform policy [1–3].

There is growing evidence in the literature about the most effective interventions to increase response rates in general population and doctor surveys. Interventions to improve response rates include incentive-based approaches (e.g. money, gifts, lottery and prize draws), design-based approaches (e.g. survey length, follow-up, content) and mode of administration (e.g. paper, internet, interview). Three key factors that influence doctors' decisions to complete a survey are the opportunity cost of their time; their trust that the results will be used appropriately; and the perceived relevance of the survey [4].

Although the literature about factors influencing response rates is growing, there are three important gaps that this paper aims to address: i) a lack of evidence on the use of mixed mode survey designs; ii) a lack of evidence examining response bias and item non-response, in addition to response rate; and iii) a lack of evidence on the cost-effectiveness of different strategies.

The use of online and web-based surveys is growing, including those where email is the method of contact. Web-based surveys may seem attractive as there are no printing or data-entry costs, but response bias may be an issue, particularly if older respondents are less likely to respond, and a lack of trust in the security of transmitting information over the internet may reduce response rates and increase item non-response [2]. For doctors, emailed surveys have resulted in lower response rates than mailed surveys [4]. A similar pattern is seen in non-doctor populations, where meta-analyses have found that web-based surveys, mostly using email contact, have a 10-11% lower response rate compared to other modes [5, 6]. This is despite the fact that email surveys that include a weblink reduce the number of steps (and time) to complete a survey. However, evidence also shows that email contact can be impersonal and reduce response rates [7]. For doctors, there is an issue of whether the email will reach the respondent or be initially read by administrative staff who may not forward such emails to respondents, though this may also be an issue if mailed surveys are posted to their work address.

Furthermore, the use of different types of mixed mode surveys for doctors has not yet been investigated thoroughly [4, 8]. This is important if, for example, younger respondents are more likely to respond to an online survey, whilst older respondents are more likely to respond to a mailed survey. Accounting for doctors' preferences about which survey mode to complete may be important. For example, in a survey of doctors in the United States, paper surveys were preferred to email surveys when doctors were given the choice [9], and family physicians were more likely than surgeons to prefer mail surveys [10]. The ability of doctors to choose their preferred mode of response to fit with their busy schedules is likely to be important [4, 11]. Evidence from non-doctor populations suggests that offering a choice of mode does not increase response rates, but that the sequencing or switching of modes (e.g. paper followed by online) may matter [12–14]. A paper examining this for US physicians showed that mail first, followed by a web survey, had a higher response rate than web followed by mail [8].

Different modes of administration may also influence survey response bias (whether those responding are representative of the population) and item non-response (the extent to which all questions have been completed), as well as overall survey response rates. Response rates are frequently regarded as sentinel indicators of methodological quality in general, and representativeness in particular [15]. Although response rates are often used as a 'conventional proxy' for response bias, there is in fact no necessary relationship between response rate and response bias [16–19]. Despite this, less than half (44%) of published surveys of doctors discussed response bias, and only 18% provided some analysis of it [20]. Item non-response is also an issue, with respondents less likely to answer sensitive questions and some skipping whole sections, depending on how the survey has been designed and administered [21]. Item non-response was higher in a web survey of university students than in a face-to-face survey [22], whilst health professionals who were younger, male, and worked in hospitals were more likely to complete a web survey than a mailed survey [23].

There is also a lack of rigorous evidence on the cost-effectiveness of the many different approaches to improving response rates and reducing bias [24]. Email and web surveys may seem cheaper than mailed surveys, and the effects on costs of mixed mode surveys are less clear. Researchers often have limited resources, and adoption of all possible measures to increase response rates is usually not possible due to cost constraints and ethical considerations, especially when the study population or sample is widely dispersed. For these reasons, researchers must make choices as to which method leads to the largest increase in response rate (or other outcome) for each dollar spent. For example, up-front financial incentives may be the most effective, but are also costly compared with other approaches [7, 25–27]. Baron et al. examined the effect of a lottery for GPs in Canada and found a 6.4% increase in the response rate at a cost of $CAD16 per additional returned survey [28]. Bjertnaes et al. examined the effects and costs of the number of reminders in a survey of Norwegian physicians, and found that costs per response increased dramatically with telephone follow-up [29]. Erdogan and Baker (2002) examined the costs and effects of different methods of follow-up in a sample of advertising agency executives, but compared average rather than incremental costs and effects [30]. A study that compared the costs of a mail and an email survey in a group of academics found the email survey's costs were lower but that mail had a 12% higher response rate [31].

The aim of this study is to conduct a randomised trial and economic evaluation of an online survey compared to two types of mixed mode. Our choice of modes reflects the importance to doctors of being able to choose how to respond, and the importance of a personalised mailed letter sent to their preferred mailing address (rather than their work address) as the main mode of contact rather than email. In all three modes, the method of contact was a mailed personalised letter. Three response modes were compared (Figure 1): (i) online mode: a mailed personal invitation letter asked doctors to log on to a secure website to fill out an online version of the questionnaire; respondents could request a paper copy by phone/fax/email or print a paper questionnaire after they logged on to the website, and they were sent a reminder letter around three weeks later that again included login details; (ii) sequential mixed mode: as above, but a paper questionnaire and reply-paid envelope were included with the reminder letter three weeks later; and (iii) simultaneous mixed mode: a paper questionnaire and reply-paid envelope were sent out with the invitation letter, which also contained login details so that respondents could choose to fill out the survey online if they wished; a reminder letter was sent three weeks later with login details only and no paper survey. Primary outcome measures were response rate, survey response bias and item non-response. An economic evaluation comparing the costs of each mode of administration was also conducted by applying the results from the trial to the expected costs of the full main wave survey.

Figure 1. Description of mode of initial contact and follow-up contact for the three arms of the trial.

Our hypotheses are that:

  1. the online mode will result in a lower response rate and higher item non-response, compared with the two mixed modes;

  2. the sequential mixed mode will have a higher response rate than the simultaneous mixed mode;

  3. the costs of the online mode will be lower than those of the two mixed modes; and

  4. the costs of the sequential mixed mode will be lower than those of the simultaneous mixed mode.

Methods

A randomised trial was conducted as part of the third and final pilot survey for Wave 1 of the Medicine in Australia: Balancing Employment and Life (MABEL) longitudinal cohort/panel study of the dynamics of the medical labour market in Australia, focusing on workforce participation and its determinants among Australian doctors [32]. The first wave of data collection, establishing the baseline cohort for the study, was undertaken in 2008.

The questionnaire included eight sections: job satisfaction, attitudes to work and intentions to quit or change hours worked; a discrete choice experiment (DCE) examining preferences and trade-offs for different types of jobs; characteristics of work setting (public/private, hospital, private practice); workload (hours worked, on-call arrangements, number of patients seen, fees charged); finances (income, income sources, superannuation); geographic location; demographics (including specialty, qualifications, residency); and family circumstances (partner and children). There were four versions of the survey, each differing slightly in order to tailor them to the type of doctor: GPs, specialists, specialists-in-training, and hospital non-specialists. Although survey length also matters for response rates, the context of the survey and the research questions being tested required a long questionnaire to ensure that sufficient data were collected to adequately test study hypotheses [32]. The length ranged from 58 questions in an eight-page booklet (for specialists-in-training) to 87 questions in a 13-page booklet (for specialists).

In all modes, doctors in remote and rural areas, defined using the Rural, Remote and Metropolitan Area (RRMA) classification as those in RRMA 6 (remote centre with population > 5,000) or RRMA 7 (other remote centre with population < 5,000) and mainly GPs, were given a cheque for $100 enclosed with the invitation letter, to recognise both their importance from a policy perspective and the significant time pressures on these doctors. The purpose was to enable meaningful inferences about recruitment and retention in rural and remote areas. Pre-paid monetary incentives, not conditional on response, have been shown to double response rates [25]. The survey described in this paper was also the main pilot survey for the main wave of MABEL, so it was important to pilot the administration of these incentives. However, the incentives did not influence the outcome of this trial, as randomisation ensured approximately equal numbers of cheques going out in each arm of the trial.

The process of logging in and completing the survey online was kept as simple as possible. Users were directed to the main web page (http://www.mabel.org.au), where they clicked on the 'Login' link, which directed them to a login page where they entered their username and password. They were then directed to the first page of the survey. Respondents could save their responses and logout, and then login again to complete the survey, and they could skip questions. Once logged in, the padlock icon was visible, indicating that the website was secure.

The primary outcomes of interest in the trial were response rates, survey response bias with respect to age, gender, doctor type and geographic location, and item response (the percentage of completed items). A three-arm parallel trial design was used with equal randomisation across arms. The sample size for the trial was calculated to detect a difference of 5% in the response rate at the 95% level of statistical significance and with a power of 80%. This indicated that a sample of 900 doctors in each arm of the trial would be required, 2,700 doctors in total. This represented just under 5% (2,700/54,160 = 0.04985) of all doctors undertaking clinical practice on the Australian Medical Publishing Company's (AMPCo) Medical Directory, which includes all doctors in all States and Territories of Australia and formed our sampling frame. This national database is used extensively for mailing purposes (e.g. the Medical Journal of Australia). The Directory is updated regularly using a number of sources: AMPCo receives 58,000 updates to doctors' details per year through biannual telephone surveys, and checks medical registration board lists, Australian Medical Association membership lists and Medical Journal of Australia subscription lists to maintain accuracy. The directory contains a number of key characteristics that can be used to check the representativeness of the sample and to adjust for any response bias in sample weighting. These characteristics include age, gender, location, and job description (used to group doctors into the four types).
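
For readers who want to reproduce the power calculation, the following is a minimal Python sketch of the standard two-proportion sample-size formula. The baseline response rates used (15% vs 20%, i.e. a 5 percentage point difference) are our own illustrative assumption; the paper does not report the assumed baseline.

```python
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Sample size per arm to detect p2 - p1 with a two-sided test of two proportions."""
    z_a = norm.ppf(1 - alpha / 2)          # critical value for the significance level
    z_b = norm.ppf(power)                  # critical value for the desired power
    p_bar = (p1 + p2) / 2                  # pooled proportion under the null
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Assumed baseline of 15% vs 20%: roughly 905 doctors per arm,
# consistent with the 900 per arm used in the trial.
print(round(n_per_arm(0.15, 0.20)))
```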

A 4.9% stratified random sample of doctors was therefore taken, with stratification by four doctor types (general practitioners (GPs), specialists, doctors enrolled in a specialist training program, and non-specialist hospital doctors (including interns and salaried medical officers)) and six rural/remoteness categories (Rural, Remote and Metropolitan Area (RRMA) classification). This produced a list of 2,702 doctors. Doctors in this sample were then randomly allocated to a response mode by AS using random numbers generated in STATA. The AMPCo unique identifiers for each of the three groups were sent to AMPCo, which conducted the mailing; invitation letters and survey materials were mailed in late February 2008. Survey invitation letters indicated the University of Melbourne and Monash University as responsible for the survey. AMPCo also provided individual-level data on the population of doctors so we could examine response bias. Doctors were aware of which mode they had been allocated to on receipt of the invitation letter. The survey manager (AL) recorded responses and organised data entry and was blinded to group allocation. SJ analysed the data and was not blinded to group allocation. Analysis was by 'intention to treat'.
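
As a minimal sketch of the sampling and allocation steps described above (the study itself used STATA, so this Python version and its column names are assumptions for illustration only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(20080201)  # arbitrary seed for reproducibility

def draw_and_allocate(directory: pd.DataFrame, frac: float = 0.049) -> pd.DataFrame:
    """Stratified random sample of a doctor directory, then equal randomisation to
    the three trial arms. 'directory' is a hypothetical frame with one row per
    doctor and columns 'doctor_type' (4 categories) and 'rrma' (6 categories)."""
    # Same sampling fraction within each doctor-type x rural/remoteness stratum.
    sample = directory.groupby(["doctor_type", "rrma"]).sample(frac=frac, random_state=1)
    # Equal allocation: repeat the three arm labels to the sample size, then shuffle.
    arms = np.resize(["online", "sequential", "simultaneous"], len(sample))
    return sample.assign(arm=rng.permutation(arms))
```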

Analysis included comparisons of response rates; estimation of means and proportions of respondents by age, gender, doctor type, and geographic location compared to the doctor population; logistic regression of response bias; and comparisons of the proportion of missing values (item non-response). The statistical significance of differences between the response rates of the three response modes was analysed using a probit model with response (0/1) as the dependent variable and two dummy variables for response mode, with online as the reference category. The difference between the sequential and simultaneous modes was tested using the restriction that their coefficients be equal. Although respondents were randomly allocated across modes, it is still important to test whether any particular respondent characteristics influenced the response rate; the probit model therefore included age, gender, doctor type and geographic location. Survey response bias was examined using a multinomial logit model of respondents (= 1) and the total population of doctors (= 0), with age, gender, geographic location and doctor type as independent variables. For item non-response, a comparison of the proportion of completed items was supplemented with generalised linear models that controlled for differences due to age, gender, geographic location and doctor type [33]. Analysis of geographic location was based on the Australian Standard Geographical Classification (ASGC) Accessibility/Remoteness Index of Australia (ARIA) [34].
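
A minimal sketch of the probit specification for response rates, using statsmodels; the data frame `trial` and its column names are hypothetical stand-ins for the trial data.

```python
import pandas as pd
import statsmodels.api as sm

# 'trial' is a hypothetical frame with one row per sampled doctor:
# responded (0/1), mode ('online'/'sequential'/'simultaneous'), age,
# female (0/1), doctor_type and remoteness (categorical).
X = pd.get_dummies(
    trial[["mode", "age", "female", "doctor_type", "remoteness"]],
    columns=["mode", "doctor_type", "remoteness"],
    drop_first=True,            # 'online' sorts first, so it becomes the reference mode
).astype(float)
X = sm.add_constant(X)

probit = sm.Probit(trial["responded"].astype(float), X).fit()

# Marginal effects relative to the online mode, interpretable as
# percentage-point differences in the probability of responding.
print(probit.get_margeff(at="overall").summary())

# Test the restriction that the two mixed-mode coefficients are equal.
print(probit.wald_test("mode_sequential = mode_simultaneous"))
```

The multinomial logit model of response bias can be fitted analogously with sm.MNLogit, stacking respondents and the population of doctors.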

The economic evaluation compared the costs of consumables (rental of AMPCo list, printing of surveys, letters, fax forms, further information fliers and reply-paid envelopes, mail-house processing costs, postage and data entry) across each mode of survey administration. The costs of researcher and staff time were the same for each mode, as each mode required the development of both the paper and web survey, and time liaising with AMPCo and the printers. These costs were therefore not included in the comparison of costs between modes. The expected costs for the main wave 1 survey were estimated based on sending out a survey to all doctors on the AMPCo database (n = 54,168) and using the response rates from the randomised trial to estimate the number of respondents. Data on the three primary outcome measures are presented alongside data on costs.
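
The cost projection described above amounts to scaling per-doctor mail-out costs to the full database and scaling data-entry and return costs by the expected number of paper responses. A hedged sketch follows; the unit costs are placeholders, not the study's actual prices.

```python
N_DOCTORS = 54_168  # doctors on the AMPCo database for the main wave

def projected_cost(response_rate: float, share_paper: float,
                   cost_per_mailout: float, cost_per_paper_return: float) -> float:
    """Consumable costs for the full wave: mail-out costs scale with the number of
    doctors contacted; data entry and return postage scale with paper responses."""
    paper_responses = N_DOCTORS * response_rate * share_paper
    return N_DOCTORS * cost_per_mailout + paper_responses * cost_per_paper_return

# Illustration using the trial response rates and observed paper shares;
# the dollar figures below are assumptions, not the values in Table 9.
online = projected_cost(0.1295, share_paper=0.03, cost_per_mailout=1.20, cost_per_paper_return=2.50)
sequential = projected_cost(0.207, share_paper=0.62, cost_per_mailout=1.60, cost_per_paper_return=2.50)
print(round(online), round(sequential))
```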

Results

Responses were received between March and October 2008. The characteristics of doctors in the three groups for the study sample are shown in Table 1. Although there are some small differences of up to three percentage points, the three groups are broadly similar in terms of key characteristics. The comparison of response rates across modes is shown in Table 2. Response rates were between 6 and 7 percentage points higher for the two mixed modes, compared to online (Table 2). Table 3 shows response rates by mode and doctor type. Specialists had the highest overall response rates and GPs the lowest. Response rates for simultaneous and sequential mixed modes were between 2 and 6 percentage points higher for GPs, and between 10 and 15 percentage points higher for specialists. For hospital non-specialists, the response rate for simultaneous mixed mode was four percentage points higher than online, but four percentage points lower than sequential mixed mode. For specialists in training, the simultaneous mixed mode had the lowest response rate, with the sequential and online modes producing similar results.

Table 1 Group characteristics of full sample
Table 2 Response rates by mode of administration
Table 3 Response rates by mode of administration and doctor type1 (in %)

The difference in response rates across modes was statistically significant (Table 4). The table reports the marginal effects of each response mode compared with the online mode, which can be interpreted as percentage-point differences in the probability of responding. Controlling for other factors, the simultaneous mixed mode had a response rate 7 percentage points higher than online, and the sequential mixed mode was 7.7 percentage points higher than online. The effect of sequential mixed mode was not significantly different from simultaneous mixed mode (χ2 = 0.16, p = 0.69). Specialists were 16 and 13 percentage points more likely to respond to the simultaneous and sequential mixed modes respectively than to the online mode. Differences for other types of doctor were not statistically significant. The probit model also controls for the effects of age, gender, doctor type and geographic area on response rate. Overall, females were less likely to respond, and the specialists' response rate was 5.3 percentage points higher than GPs'. GPs in outer regional and very remote areas were more likely to respond than those in major cities; this was partly due to the $100 financial incentive provided to doctors in these areas.

Table 4 The effect of mode on response rates (probit regression model)

Doctors allocated to each mode were given the opportunity to complete the survey online or on paper. Table 5 shows that of those allocated to the simultaneous mixed mode, 21% chose to complete the survey online, whilst of those in the online group only 3% requested and filled out a paper survey. Doctors allocated to the sequential mixed mode group were more likely to fill out a paper survey (62%) than an online one (38%).

Table 5 Actual mode of response by allocated survey mode

Response bias was examined for each mode by comparing the characteristics of respondents to each mode with the population of all doctors in Australia. This was undertaken using a multinomial logit model with four outcomes: simultaneous, sequential, online and population. Table 6 shows the odds ratios for the comparisons and factors that were statistically significant at the 95% level. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents did not differ significantly from the population. Those who filled out the simultaneous mixed mode were twice as likely to be specialists when compared to the population, whilst those filling out the sequential mixed mode were more likely to be older and more likely to be in a non-metropolitan area when compared to the population. The results also show that specialists were 2.4 times more likely to complete the simultaneous mixed mode than the online mode, and that those aged over 60 and in inner regional areas were more likely to complete the sequential mixed mode than the online mode.

Table 6 Response bias for each mode (Odds ratios and 95% CI)1

Item non-response was examined by calculating the average percentage of items completed, and the percentage of respondents who completed all relevant questions, i.e. whether the percentage of items completed was 100% (Table 7). If a question was 'not applicable', this was counted as a completed question. The order of the sections in the survey is reflected in these tables, with the job satisfaction section coming first. Note that the online survey allowed respondents to skip questions, as was the case in the paper survey. Overall, the online mode shows the lowest average percentage of items completed, with almost 89% of questions answered compared to around 92% for each of the other modes. This is the case for all sub-sections of the survey, with the section on finances, which includes income questions, having the lowest average percentage of items completed, at 80%. This difference is statistically significant, as shown in the first half of Table 8, with odds ratios of 1.48 and 1.53 for the two mixed modes compared to online. Table 8 also shows statistically significant differences for some sections of the survey. The sequential mixed mode was more likely to have a higher percentage of items completed than the online mode for the sections on the DCE, workload, and location. The simultaneous mixed mode was more likely to have a higher percentage of items completed than the online mode in the 'About You' and 'Family' sections.

Table 7 Item response by mode and questionnaire section (%)
Table 8 Item non-response by mode (Odds ratio, 95% CI)

Although the percentage of questions completed overall was 91.4%, only 2.9% of respondents completed every question; this was lowest for the simultaneous mixed mode (1.1%), followed by the sequential mixed mode (2.2%), and was highest for the online mode (6.9%) (Table 7). These differences were statistically significant (second half of Table 8), with odds ratios of 0.13 and 0.23 for the two mixed modes compared to online. The proportion of respondents completing all questions was similar across the modes for each section. Those using the simultaneous mode were significantly more likely to complete all questions in the 'Family' section compared to those in the online mode (Table 8).
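
The generalised linear models behind the item non-response odds ratios can be fitted as a fractional logit in the spirit of Papke and Wooldridge [33]. A minimal sketch, assuming a hypothetical respondent-level frame `responders` with the fraction of items completed and the covariates named below:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# frac_complete is the share of items answered (between 0 and 1); statsmodels
# warns about non-integer binomial outcomes but fits the quasi-likelihood model.
frac_model = smf.glm(
    "frac_complete ~ C(mode, Treatment('online')) + age + female + C(doctor_type) + C(remoteness)",
    data=responders,
    family=sm.families.Binomial(),
).fit(cov_type="HC1")   # heteroskedasticity-robust standard errors

# With a logit link, exponentiated coefficients are odds ratios for item
# completion in each mixed mode relative to the online reference.
print(np.exp(frac_model.params))
```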

The costs of each mode were estimated for the first wave of the survey, which was to be sent out to the population of doctors in Australia (Table 9). The online mode has the lowest total cost, followed by the sequential mixed mode, with simultaneous mixed mode having the highest cost. The total cost of the simultaneous mixed mode is 38% higher than online, and 21% higher than sequential mixed mode. The sequential mixed mode total costs are 14% higher than the online mode. The main sources of cost differences between modes are related to handling and postage of the mail-out, printing of surveys, and data entry for paper copies.

Table 9 Effect of mode on survey costs (2008 prices, in $AU)

Table 9 shows incremental cost-effectiveness ratios with respect to changes in the response rate and the number of responses. Compared to online, the sequential mixed mode was the more cost-effective of the two mixed modes: costs were $6.07 per additional response and $AU3,290 per one percentage point increase in the response rate. Although the main outcomes were similar for the two mixed modes, the sequential mode was cheaper due to lower printing, mailing and data entry costs. Using the sequential mixed mode resulted in total costs that were 21% lower than for the simultaneous mode, with no detrimental impact on response rate, survey response bias, or item non-response.
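
As a consistency check on the reported ratios, the incremental cost-effectiveness calculation can be sketched as follows. The incremental cost is backed out from the reported $6.07 per additional response rather than taken from Table 9, so the figures are illustrative only.

```python
N_DOCTORS = 54_168                 # doctors mailed in the main wave
RATE_GAIN_PP = 7.7                 # adjusted response-rate gain, sequential vs online (percentage points)

extra_responses = N_DOCTORS * RATE_GAIN_PP / 100   # roughly 4,170 additional responses
incremental_cost = extra_responses * 6.07          # implied by $6.07 per additional response

# Cost per one percentage point increase in the response rate:
print(round(incremental_cost / RATE_GAIN_PP))      # ~3,288, close to the reported $AU3,290
```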

Discussion

This study has compared response rates, survey response bias, item non-response, and costs across three modes of conducting a doctor survey. Mailing a letter inviting respondents to complete the questionnaire online, followed by a mailed reminder letter and paper copy of the survey, was the most cost-effective mode of administration. Although the online mode was less costly, due to lower printing and data entry costs, and did not exhibit evidence of increased response bias, its response rates and item completion rates were lower than for the sequential and simultaneous mixed modes. The online mode had lower item completion rates in the sections on the DCE, workload, and personal and family characteristics. Although the sequential mode is the most cost-effective with respect to response rates, whether it is chosen will depend on the weight given to its evidence of response bias relative to the population of doctors.

We find no support for the hypothesis that offering a simultaneous choice of modes results in lower response rates than a sequenced choice of mode. Literature from non-doctor populations suggests that sequencing may be better than simultaneous choice [12, 13]. Though there is a small difference in response rates between the two mixed modes, it is not statistically significant.

Lower response rates in the online mode arguably reflect the population being surveyed, their familiarity with and trust in the internet, and the reliability of internet access, especially in remote regions of Australia. Most doctors chose to fill out a paper questionnaire, possibly suggesting that they are less comfortable with completing a survey online or have concerns about sending confidential information over the internet. This is reflected in a higher rate of item non-response for most sections of the survey, especially for the more personal questions. This finding occurred despite assurances about confidentiality and the fact that information was being sent over a secure internet connection. Doctors may also prefer the 'portability' of a paper copy, which they can fill out at the office, at home or whilst travelling. Online modes are also becoming more portable (i.e. not confined to the desktop PC) with the use of laptops, touch-screen tablets and other mobile devices, so the preference for a paper copy may erode over time. A key issue in relation to survey response is the need to minimise the opportunity cost of survey completion for respondents. The need for internet access and the time it takes to log on must be balanced against filling out a paper survey that needs to be posted. A potential reason for the lower online response rate was the need for respondents to find the website and log in using the username and password provided in the letter. Once at the website, they had to go to a login page, enter their details, and were then directed to the beginning of the survey. Though this takes time compared to an email survey with an embedded website link, it provides a more secure process that may have increased respondents' confidence in the security of the website.

Response rates in all three arms could be regarded as low, an increasing issue for surveys of doctors [1–3]. It is noteworthy that our comparative analysis with the population of Australian doctors showed that the mode with the lowest response rate (online) was the most representative, confirming the point noted in the introduction, that response rate and response bias are separate issues and should both be explicitly analysed to ensure appropriate interpretation.

Our study used a diverse sample of doctors with respect to age, specialty and geographic location, increasing the generalisability of the results. Although the trial was not designed for sub-group analysis, specialists-in-training allocated to the online mode had a higher response rate (17.5%) than those allocated to the simultaneous mixed mode (13.89%), and a similar response rate to those in the sequential mixed mode (18.52%). For those conducting surveys of younger doctors and doctors in training, who are more likely to be familiar with and trusting of the internet, online surveys may be a more desirable option, though item non-response may be an issue. However, specialists had the highest response rate for the simultaneous mixed mode (26.2%, compared with 22.9% for the sequential mixed mode) and the lowest response rate for the online mode (11.2%). The routine use of exclusively online surveys for the population of doctors may therefore be some time off, at least until the current older cohorts have been replaced by younger cohorts.

The unit costs of printing and survey administration are likely to vary across geographic locations and companies, though they are not likely to vary across modes within geographic locations, and so should not influence our findings. Printing costs vary greatly with volume, such that for the pilot online mode the unit cost per printed questionnaire (for those requesting a paper survey) was $AUD5.90. However, the unit cost of printing 54,169 paper questionnaires (for the ensuing main wave survey) was $AUD0.32. The relationship between unit costs and volume printed is not linear. The costs of establishing the online survey will also vary across settings, although there are now many low cost survey packages available that cover most needs, some of which can be re-programmed if necessary.

Our results are in line with other research showing that online surveys are likely to yield lower response rates than mailed surveys [5, 6]. Other studies have compared mail and online mixed modes in non-doctor samples [12, 13], but these studies have not examined costs. There are many different types of response mode, and different combinations of mixed modes, that can potentially be used in surveys of doctors. Further research is required in a number of areas. First, comparisons are needed of modes that offer a choice with those that do not [14]. Second, all comparisons need to include an examination of changes in costs; this is mentioned frequently in the literature as a motivation for using online and mixed modes, but there is little evidence on how costs actually differ.

Conclusion

Our study is the first, in the context of a large national survey of doctors, to include an economic evaluation alongside a randomised trial using standardised methods. Of the alternatives compared in our study, the sequential mixed mode had the lowest cost per response compared to online. Decisions on the appropriate response mode will ultimately be a function of the study objectives and context, but for large national surveys of the doctor population that include doctors at different stages of their career, the sequential mixed mode seems to be the preferred option.

References

  1. Barclay S, Todd C, Finlay I, Grande G, Wyatt P: Not another questionnaire! Maximising the response rate, predicting non-response and assessing non-response bias in postal questionnaire studies of GPs. Family Practice. 2002, 19: 105-111. 10.1093/fampra/19.1.105.

  2. Aitken C, Power R, Dwyer R: A very low response rate in an on-line survey of medical practitioners. Australian & New Zealand Journal of Public Health. 2008, 32: 288-289. 10.1111/j.1753-6405.2008.00232.x.

  3. Grava-Gubins I, Scott S: Effects of various methodologic strategies: Survey response rates among Canadian physicians and physicians-in-training. Canadian Family Physician. 2008, 54: 1424-1430.

  4. VanGeest JB, Johnson TP, Welch VL: Methodologies for Improving Response Rates in Surveys of Physicians: A Systematic Review. Evaluation & the Health Professions. 2007, 30: 303-321. 10.1177/0163278707307899.

  5. Shih TH, Fan XT: Comparing response rates from Web and mail surveys: A meta-analysis. Field Methods. 2008, 20: 249-271. 10.1177/1525822X08317085.

  6. Manfreda KL, Bosnjak M, Berzelak J, Haas I, Vehovar V: Web surveys versus other survey modes - A meta-analysis comparing response rates. International Journal of Market Research. 2008, 50: 79-104.

  7. Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews. 2009, 3

  8. Beebe TJ, Locke GR, Barnes SA, Davern ME, Anderson KJ: Mixing Web and Mail Methods in a Survey of Physicians. Health Services Research. 2007, 42: 1219-1234. 10.1111/j.1475-6773.2006.00652.x.

  9. Kroth PJ, McPherson L, Leverence R, Pace W, Daniels E, Rhyne RL, Williams RL, Consortium PN: Combining Web-Based and Mail Surveys Improves Response Rates: A PBRN Study From PRIME Net. Annals of Family Medicine. 2009, 7: 245-248. 10.1370/afm.944.

  10. Parsons JA, Warnecke RB, Czaja RF, Barnsley J, Kaluzny A: Factors associated with response rates in a national survey of primary care physicians. Evaluation Review. 1994, 18: 756-766. 10.1177/0193841X9401800607.

  11. McMahon SR, Iwamoto M, Massoudi MS, Yusuf HR, Stevenson JM, David F, et al: Comparison of e-mail, fax and postal surveys of pediatricians. Pediatrics. 2003, 111: e299-e303. 10.1542/peds.111.4.e299.

  12. Converse PD, Wolfe EW, Huang XT, Oswald FL: Response rates for mixed-mode surveys using mail and e-mail/Web. American Journal of Evaluation. 2008, 29: 99-107. 10.1177/1098214007313228.

  13. de Leeuw ED: To mix or not to mix data collection modes in surveys. Journal of Official Statistics. 2005, 21: 233-255.

  14. Millar M, O'Neill A, Dillman D: Are mode preferences real?. 2009, Pullman: Washington State University

  15. Asch DA, Jedrziewski MK, Christakis NA: Response rates to mail surveys published in medical journals. Journal of Clinical Epidemiology. 1997, 50: 1129-1136. 10.1016/S0895-4356(97)00126-1.

  16. Schoenman JA, Berk ML, Feldman JJ, Singer A: Impact of differential response rates on the quality of data collected in the CTS physician survey. Evaluation & the Health Professions. 2003, 26: 23-42. 10.1177/0163278702250077.

  17. Groves RM, Peytcheva E: The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly. 2008, 72: 167-189. 10.1093/poq/nfn011.

  18. Lynn P: The problem of nonresponse. International handbook of survey methodology. Edited by: de Leeuw ED, Hox JJ, Dillman DA. 2008, New York: Lawrence Erlbaum Associates, 35-55.

  19. Schouten B, Cobben F, Bethlehem J: Indicators for the representativeness of survey response. Survey Methodology. 2009, 35: 101-113.

  20. Cummings SM, Savitz SM, Konrad TR: Reported response rates to mailed physician questionnaires. Health Services Research. 2001, 35: 1347-1355.

  21. Bosnjak M, Tuten TL, Wittmann WW: Unit (non)response in Web-based access panel surveys: An extended planned-behavior approach. Psychology & Marketing. 2005, 22: 489-505. 10.1002/mar.20070.

  22. Heerwegh D, Loosveldt G: Face-to-face versus web surveying in a high-internet-coverage population. Differences in response quality. Public Opinion Quarterly. 2008, 72: 836-846. 10.1093/poq/nfn045.

  23. Lusk C, Delclos GL, Burau K, Drawhorn DD, Aday LA: Mail versus Internet surveys - Determinants of method of response preferences among health professionals. Evaluation & the Health Professions. 2007, 30: 186-201. 10.1177/0163278707300634.

  24. Drummond M, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL: Methods for the economic evaluation of health care programmes. 2005, Oxford University Press, Third edition

  25. Edwards P, Roberts I, Clarke M, Di Guiseppi C, Pratap S, Wentz R, Kwan I: Increasing response rates to postal questionnaires: systematic review. British Medical Journal. 2002, 324: 1183-1192. 10.1136/bmj.324.7347.1183.

  26. Larson PD, Chow G: Total cost/response rate trade-offs in mail survey research: impact of follow-up mailings and monetary incentives. Industrial Marketing Management. 2003, 32: 533-537. 10.1016/S0019-8501(02)00277-8.

  27. James KM, Ziegenfuss JY, Tilburt JC, Harris AM, Beebe TJ: Getting Physicians to Respond: The Impact of Incentive Type and Timing on Physician Survey Response Rates. Health Services Research. 2011, 46: 232-242. 10.1111/j.1475-6773.2010.01181.x.

  28. Baron G, De Wals P, Milord F: Cost-effectiveness of a lottery for increasing physicians' responses to a mail survey. Evaluation & the Health Professions. 2001, 24: 47-52. 10.1177/01632780122034777.

  29. Bjertnaes OA, Garratt A, Botten G: Nonresponse Bias and Cost-Effectiveness in a Norwegian Survey of Family Physicians. Evaluation & the Health Professions. 2008, 31: 65-80.

  30. Erdogan BZ, Baker MJ: Increasing mail survey response rates from an industrial population - A cost-effectiveness analysis of four follow-up techniques. Industrial Marketing Management. 2002, 31: 65-73. 10.1016/S0019-8501(00)00117-6.

  31. Shannon DM, Bradshaw CC: A comparison of response rate, response time, and costs of mail and electronic surveys. Journal of Experimental Education. 2002, 70: 179-192. 10.1080/00220970209599505.

  32. Joyce CM, Scott A, Jeon S-H, Humphreys J, Kalb G, Witt J, Leahy A: The "Medicine in Australia: Balancing Employment and Life (MABEL)" longitudinal survey - Protocol and baseline data for a prospective cohort study of Australian doctors' workforce participation. BMC Health Services Research. 2010, 10: 50-10.1186/1472-6963-10-50.

  33. Papke LE, Wooldridge JM: Econometric Methods for fractional response variables with an application to 401(k) plan participation rates. Journal of applied econometrics. 1996, 11: 619-632. 10.1002/(SICI)1099-1255(199611)11:6<619::AID-JAE418>3.0.CO;2-1.

  34. Australian Bureau of Statistics: ASGC remoteness classification: purpose and use. 2003, Canberra: ABS


Acknowledgements and Funding

Funding was provided from a National Health and Medical Research Council Health Services Research Grant (454799) and the Commonwealth Department of Health and Ageing. None of the funders had a role in the data collection, analysis, interpretation or writing of this paper. The views in this paper are those of the authors alone. We thank the doctors who gave their valuable time to participate in MABEL, and the other members of the MABEL team for data cleaning and comments on drafts of this paper: Terence Cheng, Daniel Kuehnle, Matthew McGrail, Michelle McIsaac, Stefanie Schurer, Durga Shrestha and Peter Sivey. The study was approved by the University of Melbourne Faculty of Economics and Commerce Human Ethics Advisory Group (Ref. 0709559) and the Monash University Standing Committee on Ethics in Research Involving Humans (Ref. CF07/1102 - 2007000291). De-identified data from MABEL are available from http://www.mabel.org.au.

Corresponding author

Correspondence to Catherine M Joyce.

Additional information

Competing interests

All authors have completed the Unified Competing Interest form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that none of the authors have financial or non-financial interests that may be relevant to the submitted work.

Authors' contributions

AS, CJ, GK, JH, JW, SJ jointly conceived of this work. All authors participated in designing and developing the study instruments and procedures. AS drafted the paper, and SJ maintained the dataset and conducted the statistical analysis. All authors assisted with re-drafting sections of the paper and interpreting results. AL organised the administration of the survey and data entry. All authors approved the final version of the manuscript. AS is the guarantor.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Scott, A., Jeon, SH., Joyce, C.M. et al. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Med Res Methodol 11, 126 (2011). https://doi.org/10.1186/1471-2288-11-126
