Response rates to a mailed survey of a representative sample of cancer patients randomly drawn from the Pennsylvania Cancer Registry: a randomized trial of incentive and length effects

Abstract

Background

In recent years, response rates to telephone surveys have declined. Online surveys may miss many older and poorer adults. Mailed surveys may have promise in securing higher response rates.

Methods

In a pilot study, 1,200 breast, prostate and colon cancer patients, randomly selected from the Pennsylvania Cancer Registry, were mailed surveys. Incentive amount ($3 vs. $5) and survey length (10 pages vs. 16 pages) were randomly assigned.

Results

Overall, there was a high response rate (AAPOR RR4 = 64%). Neither the amount of the incentive nor the length of the survey significantly affected the response rate. Colon cancer surveys were returned at a significantly lower rate (RR4 = 54%) than breast or prostate cancer surveys (RR4 = 71% and 67%, respectively; p < .001 for both comparisons). There were no significant interactions among cancer type, survey length and incentive amount in their effects on response likelihood.

Conclusion

Mailed surveys may provide a suitable alternative option for survey-based research with cancer patients.

Background

In the last several years, researchers have reported declining response rates to telephone surveys across many areas of study [1–5]. Web-based surveys are an increasingly popular alternative because they yield quicker responses than mailed surveys, but their average response rates are not high [6]. In addition, their administration can be expensive, and the representativeness of many Internet sampling frames is problematic [7]. There is evidence that some populations are less likely to use the Internet: those over age 55 are still less likely to use it than their younger counterparts [8, 9], and Internet surveys may also miss those of lower socio-economic status [7].

Phone surveys are typically more expensive than either mailed or web surveys, as they require trained interviewers to make the calls. Research comparing modes has found that telephone surveys carry a higher likelihood of extreme positive responses due to recency effects [10]. In addition, traditional random-digit-dial (RDD) sampling has faced increasing challenges as more households rely primarily on cell phones [11, 12]. Indeed, some research shows that response rates for mailed surveys can be much higher than for surveys administered by phone or Internet [6, 13, 14].

Given the advantage over web surveys in coverage and the potential cost savings compared to phone surveys, we were interested in using a mailed survey to collect data from patients with three types of cancer. The purpose of this study was to test the feasibility of using a mailed survey to gather information from recently diagnosed cancer patients, and to test whether specific procedures would affect response rates. We piloted mailed survey methods on a sample of 1,200 cancer patients randomly chosen from the Pennsylvania Cancer Registry and experimented with several strategies to determine how to achieve the highest response rate for the lowest cost. Two incentive amounts ($3 and $5) and two survey lengths (33 questions/10 pages and 61 questions/16 pages) were tested with breast, prostate and colon cancer patients.

Increasing response rates

Much research has investigated the effects of manipulating specific features of a mailed survey (e.g., anonymity, color, number of follow-up mailings) to increase response rates or reduce non-response bias [15–28]. One of the most frequently studied features is the inclusion of monetary incentives [17–19]. While there is evidence for the benefit of using incentives [15, 17, 19–23], researchers have not yet determined an ideal denomination [24]. Some have found that the increase in response rate is not necessarily monotonically related to incentive amount [18, 24–26]. Warriner et al. (1996) found $5 to be more effective than $2, but no less effective than $10 [26]. James & Bolstein (1990) showed that response rates increased from $.25 to $.50 and from $.50 to $1, but not between $1 and $2 [18]. In a meta-analysis of randomized controlled trials of monetary incentives, Edwards, Cooper, Roberts and Frost found that the pooled odds ratio for response per $.01 of incentive decreased monotonically as the maximum incentive amount increased [22]. Dillman offers a theoretical explanation: incentive amounts have diminishing returns as the amount approaches the actual value of the service being performed, at which point people perceive answering the survey as an economic exchange rather than a social exchange and have an easier time refusing the money [28].

A second frequently manipulated feature is the length of the survey. Studies of the effect of length on response rates have had mixed results [29]. Some studies, including a review of 210 published studies of patient satisfaction surveys, have found no effect [17, 30–32]. However, in studies in which the difference between the two survey versions was more dramatic, significant effects have emerged [28, 33, 34]. Two meta-analyses of clinical trials found that shorter questionnaires increased the likelihood of response [35, 36]. However, Edwards, Roberts, Sandercock & Frost (2004) found that the effects were larger when the questionnaire was short to begin with, which is not the case in this study [36].

It also seemed likely that there would be differences in return rates among the three types of cancer. The literature offers little guidance on whether patients with one type of cancer might be more likely to respond than others, as the majority of surveys focus on a single cancer. A recent study did find that breast cancer patients responded at a higher rate than those with prostate or colon cancer, but it had not been published at the time we designed our experiment [37]. It is also possible that these three factors (incentive, survey length and cancer type) interact to affect response rates.

Lastly, media coverage of specific cancers, gender differences [38–40], age [39], and disease severity/characteristics [40] are all factors that could potentially affect response rates. Some researchers have argued that the effects of incentives may be contingent on demographic group [41, 42].

Based on the literature and this logic, the study included both hypotheses and research questions. The hypotheses were that a $5 incentive would result in a higher response rate than a $3 incentive, and that the longer survey would have a lower response rate than the shorter survey. The research questions were: How would response rates differ by type of cancer? Would length of survey and incentive amount interact with each other, or with type of cancer, in affecting response rate? Would incentive amount interact with specific demographic characteristics, such as race or marital status?

Methods

Participants

The sample was drawn from the entire population of patients with cancers of the breast (females only), prostate (males only) or colon (males and females) reported to the Pennsylvania Cancer Registry in 2005 in time to have their data compiled by July 2006 (approximately 55% of the 20,200 incident cases across the three cancers in 2005; see Figure 1). The age range was not restricted. A sample of 400 people was randomly chosen for each of the three cancers. This provided sufficient power (alpha = .05, power = 80%) to detect differences greater than 5% in response rates between relevant treatment groups, assuming a response rate of around 60%.
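To make the power claim concrete, the sketch below re-derives power for a two-proportion comparison under the stated assumptions (alpha = .05, roughly 600 cases per arm when the sample is split on one factor, a base response rate near 60%). It is an illustration, not the authors' calculation, and the 8% difference shown is just an example input.

```python
# Illustrative power check for comparing response rates between two
# arms of the experiment (e.g., $3 vs. $5, ~600 patients per arm).
# Not the authors' calculation; a two-sided two-proportion z-test is assumed.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

alpha = 0.05
n_per_arm = 600          # 1,200 patients split across two incentive arms
base_rate = 0.60         # anticipated response rate
detectable_diff = 0.08   # example difference to detect (assumed value)

# Cohen's h effect size for the two proportions
effect = proportion_effectsize(base_rate + detectable_diff, base_rate)
power = NormalIndPower().power(effect_size=effect, nobs1=n_per_arm,
                               alpha=alpha, ratio=1.0)
print(f"Power to detect an {detectable_diff:.0%} difference: {power:.2f}")
```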

Figure 1. CONSORT flow diagram.

Procedure

The design was a 2 × 2 factorial experiment, with incentive amount ($3 vs. $5) and survey length (short vs. long) as independent factors. The short version contained 33 questions (10 pages), compared with 61 questions (16 pages) for the long version. The measures excluded from the short version were similar in style and difficulty to those that were retained.

The surveys were designed in Adobe InDesign and printed in one color (blue) with tinting. A glossy cover was added, and the result was an 8.5 by 11-inch booklet. The content of the surveys for each cancer was identical except for minor wording changes to four of the 61 questions. The mailing procedures followed recommendations from Dillman [15]. Procedures were approved by the University of Pennsylvania Institutional Review Board and the Pennsylvania Cancer Registry.

Each cancer patient was randomized to one of four conditions: short survey, $3; short survey, $5; long survey, $3 or long survey, $5. Randomization was conducted using a random number generator in SPSS. Only the research director had access to the master list. Each subject was assigned a code corresponding to one of the four conditions. These codes were placed on the inside back cover of the survey for identification upon their return.
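The assignment step could be reproduced along the following lines; this is a hedged sketch (the authors used SPSS's random number generator, not Python), with a hypothetical seed and hypothetical sequential patient IDs.

```python
# Sketch of the random assignment to the four experimental conditions.
# The authors used SPSS; this Python version is illustrative only.
import random

CONDITIONS = ["short_$3", "short_$5", "long_$3", "long_$5"]

def assign_conditions(patient_ids, seed=2006):
    """Map each patient ID to one of the four conditions (simple
    randomization; the paper does not state whether arms were balanced)."""
    rng = random.Random(seed)  # hypothetical fixed seed, so the master list is reproducible
    return {pid: rng.choice(CONDITIONS) for pid in patient_ids}

# e.g., the 1,200 sampled patients with hypothetical IDs 1..1200
master_list = assign_conditions(range(1, 1201))
```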

The incentives consisted of three one-dollar bills or one five-dollar bill. Survey packets were assembled by members of the research team and other graduate research assistants, who signed IRB-approved confidentiality forms because they would see the names and addresses on the envelopes.

Mailing procedure

On September 1st, an introductory letter explaining that a survey would arrive in a few days, along with a brochure about the Pennsylvania Cancer Registry, was mailed. The letter indicated that the respondent could opt out of completing the survey and described procedures for requesting to be dropped from the study. On September 6th, the first copy of the survey, a second letter, a business reply envelope, and the cash incentive were mailed. The return of the mailed survey served as evidence of consent.

Two weeks later, a reminder letter and another copy of the survey (as well as a business reply envelope) were sent to those from whom no survey had yet been received. No additional cash incentive was included with the third mailing. All mailings were sent via first class mail, to ensure timeliness.

Measures

Dependent variables

Overall response. Response rates were calculated four months after the first mailing (in early January), using American Association for Public Opinion Research (AAPOR) standard definitions [43]. For the experimental analyses, AAPOR response rate 2 (RR2) was used. RR2 allows for the inclusion of both complete and partial questionnaires: it divides the number of returned questionnaires by the number of all potentially eligible cases, refusals and non-refusals alike. It excludes from the denominator cases known to be ineligible: those reported as deceased by a family member (n = 22) or who contacted us to say they did not have cancer (n = 27). However, RR2 underestimates the true response rate by an unknown amount, because its denominator includes both all cases that did not return the questionnaire and all cases whose questionnaires the post office returned because the addressee was unknown; it is plausible that some of those who did not respond were deceased.

To address this concern, an adjusted response rate was calculated corresponding to AAPOR's response rate 4 (RR4), which corrects the rate for those estimated to be ineligible to respond; here, the denominator is reduced by an estimate of those likely to be deceased. Two sources of mortality were taken into account: expected mortality from all causes given the age distribution of the sample, based on CDC published estimates [44], and the incremental mortality specific to each cancer diagnosis and stage. Cancer-specific incremental mortality estimates were based on SEER data on 1- and 2-year survival rates for each of the three cancers [40]. By interpolation, estimates were derived for survival at 16 months, the mean number of months between diagnosis and receipt of the questionnaire (estimated mortality: 5% for breast, 4% for prostate and 21% for colon). These adjusted response rates (RR4) are reported alongside RR2.
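As a minimal sketch of the arithmetic described above (not the authors' code), the functions below compute RR2, interpolate 16-month survival from the 1- and 2-year rates, and apply the mortality adjustment to non-respondents to obtain RR4.

```python
# Hedged sketch of the response-rate calculations described above.

def interpolate_survival(one_year, two_year, months=16):
    """Linearly interpolate a survival rate between the 1- and 2-year
    SEER figures (16 months = mean time from diagnosis to questionnaire)."""
    return one_year + (two_year - one_year) * (months - 12) / 12.0

def rr2(returned, sampled, known_ineligible):
    """RR2: returned questionnaires over all cases not known to be ineligible."""
    return returned / (sampled - known_ineligible)

def rr4(returned, sampled, known_ineligible, est_mortality):
    """RR4 as described above: shrink the denominator by the share of
    non-respondents estimated to be deceased (est_mortality)."""
    nonrespondents = sampled - known_ineligible - returned
    eligible = sampled - known_ineligible - est_mortality * nonrespondents
    return returned / eligible

# Hypothetical illustration (not the paper's actual counts):
# rr4(returned=250, sampled=400, known_ineligible=15, est_mortality=0.21)
```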

Statistical Analysis

Descriptive analyses of respondents' demographic characteristics by condition are shown in Table 1. Summary analyses (Table 2) report RR2 for each condition and RR4 for the three cancers and the total sample. While the overall response rate can be adjusted on the basis of assumptions about mortality among non-respondents, more fine-grained analyses required assigning each case to a response category. Thus, while overall claims about response rates reflect RR4, the experimental analyses and demographic group comparisons are restricted to the cases used for RR2. Logistic regression analyses estimated the effects of survey length, incentive amount, type of cancer, and their interactions on response (Table 3). All regression analyses included three blocks. The first block, the only one reported in Table 3, included the main effects of incentive amount, length of survey, and type of cancer (and, for the demographic analyses, all of the demographic variables). Type of cancer was dummy coded with colon cancer patients serving as the comparison group. The second block added the two-way interactions: incentive amount by length of survey, incentive amount by type of cancer, and length of survey by type of cancer (for the demographic analyses, all two-way interactions between these variables and demographics). The third block added the three-way interactions as a forced-entry block.
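A rough translation of this blockwise design into code might look like the following. It is a sketch using statsmodels' formula interface (the original analyses were run in SPSS) and assumes hypothetical names: a data frame `df` with one row per case, a 0/1 `responded` outcome, and `incentive`, `length` and `cancer` columns; the data below are synthetic stand-ins.

```python
# Sketch of the three-block logistic regressions described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the real analyses used the registry sample.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "responded": rng.integers(0, 2, 1200),
    "incentive": rng.choice([3, 5], 1200),
    "length": rng.choice(["short", "long"], 1200),
    "cancer": rng.choice(["breast", "prostate", "colon"], 1200),
})

blocks = [
    # Block 1: main effects; colon is the reference category for cancer
    "responded ~ incentive + length + C(cancer, Treatment('colon'))",
    # Block 2: main effects plus all two-way interactions
    "responded ~ (incentive + length + C(cancer, Treatment('colon'))) ** 2",
    # Block 3: adds the three-way interaction (forced entry)
    "responded ~ incentive * length * C(cancer, Treatment('colon'))",
]

for formula in blocks:
    fit = smf.logit(formula, data=df).fit(disp=False)
    print(fit.summary())
```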

Table 1 Demographic characteristics of respondents by condition
Table 2 Response rates by incentive amount, length and cancer type
Table 3 Results of logistic regression predicting response from incentive amount, survey length and type of cancer

Post hoc analyses were conducted to determine whether those receiving the long survey or the smaller incentive skipped more items than those in the other conditions. Also, for the 33 questions in the short survey, respondents were compared by condition to see whether the quality of responses varied: means and standard deviations were computed for each condition and t-tests conducted to check for significant differences. Additional analyses tested differences between responders and non-responders using chi-square tests and, for the age variable, non-parametric tests of medians.
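The responder/non-responder comparisons could be sketched as follows, again with hypothetical column names (`responded`, `race`, `age`) and synthetic data rather than the authors' code; chi-square tests cover the categorical variables and Mood's median test covers age.

```python
# Sketch of the responder vs. non-responder comparisons described above.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, median_test

# Synthetic stand-in data; the real analyses used the registry sample.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "responded": rng.integers(0, 2, 1200),
    "race": rng.choice(["White", "non-White"], 1200),
    "age": rng.normal(66, 12, 1200),
})

# Chi-square test for a categorical demographic variable
chi2, p_chi, dof, _ = chi2_contingency(pd.crosstab(df["responded"], df["race"]))

# Mood's median test for age (non-parametric test of medians)
stat, p_med, grand_median, _ = median_test(
    df.loc[df["responded"] == 1, "age"],
    df.loc[df["responded"] == 0, "age"],
)
print(f"race: p = {p_chi:.3f}; age medians: p = {p_med:.3f}")
```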

Results

Respondent characteristics

Demographic characteristics of respondents are presented in Table 1.

Response rates

The overall estimated RR4 response rate was 67%. The average RR2 rate usable for the experimental analyses across the 12 conditions was 62%, ranging from 42% for colon cancer participants receiving $5 and the long survey to 72% for prostate cancer participants receiving $3 and the short survey (See Table 2).

Logistic regression analyses predicting RR2 response revealed no significant difference between participants who received the $3 incentive and those who received the $5 incentive (OR = .99, p = 0.93) (see Table 3). There was a trend for the shorter survey to achieve a higher response rate, but it did not reach significance (OR = .79, p = .06). Type of cancer, however, did influence response rate: colon cancer participants responded significantly less often than both breast cancer and prostate cancer participants (p < .001 for both comparisons), and these differences remained significant when controlling for age, race, marital status and stage of cancer. There was no difference between breast cancer and prostate cancer participants (OR = 1.15, p = .35). None of the two-way interactions (incentive amount by length, incentive amount by type of cancer, type of cancer by length of survey), nor the three-way interaction of type of cancer by length of survey by incentive amount, was significant (results not shown).

Demographics and cancer stage

Analyses revealed significant effects of age (p = .002), race (p = .001), and stage of cancer (p = .001) on response rate (see Table 4). Older individuals (≥65 years) were less likely to respond than younger individuals (≤64 years); response rates (RR2) were 58% and 67%, respectively. Whites were more likely to respond than non-Whites (RR2 = 64% vs. 51%). Individuals with metastatic cancer were significantly less likely to respond (RR2 = 44%) than individuals at other stages (RR2 = 66%). There were no significant interactions of incentive amount or length with any of the demographic variables.

Table 4 Response rates (%) for demographic groups by condition

Post hoc analyses

In order to assess non-response bias, we compared participants who responded and participants who did not on gender (for colon cancer only), age at diagnosis, marital status, race and stage of cancer (see Table 5). Median age for respondents and non-respondents was compared using non-parametric tests of medians. Chi square analyses were conducted for all other variables. The results suggest that despite the high response rate, there were some biases in the sample. This result led us to correct for bias by oversampling stage 4 and African American respondents for subsequent data collection.

Table 5 Comparison of respondents and non-respondents on demographic characteristics

In addition, analyses were conducted to determine whether survey length or level of incentive was associated with the quality of responses, including skipping more questions or answering questions differently. Only 18% of respondents skipped any questions. Thus, although there was a significant difference by incentive amount in the proportion who skipped any questions, the number skipped in any condition was so low that the difference raised no practical concern. There were only chance differences in substantive responses by incentive and length, based on the 33 questions common to all four surveys.

Discussion

Overall, the results indicated that a small cash incentive and appropriate recruitment procedures can produce strong response rates to a lengthy mailed survey among cancer patients. Our overall adjusted response rate was 67%. The rates reported here are slightly higher than, but comparable to, some prior studies, which achieved response rates of 60-65% [45, 46]. However, many of the studies that achieved the highest response rates used convenience samples or other non-representative populations. In addition, this study was undertaken more than 10 years after those referenced studies, a period in which declining response rates to all forms of surveys have been a major concern of researchers. The achieved rate is substantially higher than other studies have found for mailed surveys in the last decade [13, 37, 47]. A review of 141 academic papers describing 175 separate studies published in management and the behavioral sciences in 1975, 1985 and 1995 estimated an average response rate of 55.6% [47]. That a mailed survey drawing from a statewide registry in 2006 achieved this rate despite the reported declines in response rates is noteworthy. The only parallel evidence we could find for mailed cancer patient surveys comes from a 2005 study in the Netherlands using a single hospital's registry, which achieved a response rate of 62% [48].

Interestingly, the amount of the incentive ($3 versus $5) was not a significant predictor of response; response rates were virtually identical between participants who received $3 and those who received $5. That a high response rate was achieved with a relatively small incentive is encouraging for future research.

While there was a tendency for the shorter questionnaire to earn a higher response rate, the difference was not statistically significant. This was surprising, given the substantial difference in length between the two versions. The post hoc results suggest that length and incentive amount did not affect item-level response either.

The adjusted response rates for breast cancer and prostate cancer participants were comparable (73% and 69.5%, respectively). However, colon cancer participants responded at a lower rate (57%) than both breast cancer and prostate cancer participants. The pattern among the three cancers is consistent with the large mixed-mode ACS survey of survivors, in which 42% of breast, 35% of prostate and 30% of colon cancer patients responded [37]. This pattern may be partly explained by the differential morbidity associated with these cancers, which may affect the ability to respond.

An alternative interpretation relies not on the actual inability to respond but the emotions evoked by the surveys. According to the Leverage-Saliency Theory of Survey Participation, the achieved influence of a particular feature is a function of how important it is to the potential respondent, whether its influence is positive or negative, and how salient it becomes to the sample person during the presentation of the survey request [49]. A survey which reminds a patient that he or she has a cancer with relatively higher morbidity and less positive prognosis (colon cancer) may result in a lower response rate than a survey about a cancer evoking emotions about a health condition with a better prognosis (prostate or breast).

Similarly, both of these explanations (actual inability and psychological reluctance) may also explain why patients at more advanced stages of disease were less likely to respond (45% for metastatic versus 66% for others). Colon cancer patients were more likely to be at a higher stage at diagnosis (14% metastatic, versus 5% for breast and 2% for prostate).

The results by demographics provided valuable information for the sampling plan of the larger subsequent study. For example, because non-Whites and those with stage 4 cancers consistently responded less often across conditions, those two groups were over-sampled to ensure sufficient sub-group numbers in the larger study. The lack of evidence for interactions between demographics and condition was encouraging: even if the $3 incentive and the short survey were used, the heterogeneity of the sample would not be compromised.

The high rates of response despite secular declines in response rates may have several explanations; two are discussed here. First, the procedures followed those recommended by Dillman quite closely. Second, the questionnaires were sent to cancer patients diagnosed in the previous calendar year, a high-salience issue for these patients. Moreover, the questions asked primarily about their experience in trying to choose treatments and survive with their cancers, and about their use of public as well as medical sources of information. Respondents may have appreciated the opportunity to discuss these topics.

These results should generalize well to other cancer patient research contexts, given that the sample was representative and the mailed survey implementation followed standard recommended procedures, subject to the limitations described below.

Limitations

The study had several limitations. Because it lacked a $0 incentive control group, it is not possible to conclude that the incentive did not matter, only that $3 and $5 did not differ in their effects on response rates. The lack of difference may be because the gap between the two amounts was too small to affect behavior; had the denominations been $2 and $6, or $0 and $5, there may have been a better chance of detecting differences. A more systematic study would include several different dollar amounts to test the differences in effects between them. However, it is also important to consider cost-effectiveness: a higher dollar amount might produce a slightly higher response rate, but the increase may not justify the additional cost. Future research should investigate the incremental cost per questionnaire returned.
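As a worked illustration of that cost-effectiveness point (with invented numbers, not study data), the incremental cost per additional returned questionnaire can be computed as follows.

```python
# Hypothetical incremental cost-effectiveness arithmetic; all figures
# below are invented for illustration, not taken from the study.
def incremental_cost_per_return(n, cost_low, rate_low, cost_high, rate_high):
    """Extra dollars spent per additional returned questionnaire when
    moving from the cheaper incentive to the costlier one."""
    extra_cost = n * (cost_high - cost_low)
    extra_returns = n * (rate_high - rate_low)
    return float("inf") if extra_returns <= 0 else extra_cost / extra_returns

# Mailing 1,000 surveys: $3 at a 62% return vs. $5 at a 64% return
# -> $2,000 extra for 20 extra returns = $100 per additional questionnaire.
print(incremental_cost_per_return(1000, 3, 0.62, 5, 0.64))
```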

It is possible that the lack of difference reflects cancer patients' sense of altruism or their need to give something back to others who might be going through a similar experience. In addition, we know that the salience of the survey topic increases the likelihood of response [49]. Leverage-Salience Theory can again explain these findings: the salience of the survey topic may have mattered more than any incentive amount and thus diminished the incentive's effect.

A second limitation is that both response rates exclude the twenty-seven people who called or sent letters to say they did not have cancer. It is plausible that some patients who truly have cancer deny it or consider their condition pre-cancerous (such as those with stage 0 colon cancer or ductal carcinoma in situ). It is also possible that some patients were mistakenly reported to the PCR (misdiagnosed). In fact, those designated by the PCR as having stage 0 cancer responded at a slightly lower rate than those with stage 1 and stage 2 cancers (58%, compared to 65% and 67%), though the difference was not significant. It is impossible to determine whether any of the people denying a cancer diagnosis were misclassified as ineligible. However, this group is 2% of the total sample, so the effect on the substantive results reported here is negligible.

Despite the high response rates, there is a strong possibility that those least likely to respond to a survey about cancer are the people who have fared the worst in their treatment or who have had the most negative experiences. Groves et al. (2006) note that salience does not always have a positive effect: "When the topic of the survey ... generates negative thoughts, unpleasant memories or reminders of embarrassing personal failings, then the topic may suppress participation despite its personal relevance" [3]. When the reasons for non-response are correlated with variables within the survey, non-response bias represents a significant threat to validity. However, the relatively high overall response rates provide some confidence that these biases have been minimized and that a fairly representative sample of the Pennsylvania patient population with these three cancers has been included. The lower response rate from stage 4 patients may be an exception to this claim.

Conclusions

As telephone surveys continue to achieve low response rates, and until Web methods can reach a less biased sample (particularly of older and poorer populations) and draw on appropriate sampling frames of Internet addresses for cancer patients, mailed surveys may provide a promising alternative for reaching cancer patients. The evidence presented here suggests that high response rates can be achieved even when the burden of questions is high and the incentive amount is small (just $3).

Author contributions

BK wrote the background and literature review, conducted some analyses and wrote the discussion section. RH conceived of the idea for the experiment, advised on the approach for response rate calculations and edited and revised several drafts. TF conducted data analyses, provided tables and reviewed drafts. All authors read and approved the final manuscript.

Abbreviations

AAPOR: American Association for Public Opinion Research. RR2: AAPOR's standard Response Rate 2, which allows for the inclusion of both complete and partial questionnaires; it divides the number of returned questionnaires by the number of all potentially eligible cases (refusals and non-refusals), excluding cases known to be ineligible. RR4: AAPOR's Response Rate 4:

RR4 = (I + P) / [(I + P) + (R + NC + O) + e(UH + UO)]

where I and P are complete and partial questionnaires, R refusals, NC non-contacts, O other non-respondents, UH and UO cases of unknown eligibility, and e the estimated proportion of unknown-eligibility cases that are in fact eligible. RR4 thus reduces the denominator of the response rate estimator to eliminate those likely to be ineligible; in this instance, the estimate of non-respondents for all reasons is reduced by those likely to be deceased.

References

1. Brick JM, Dipko S, Presser S, Tucker C, Yuan Y: Non-response bias in a dual frame sample of cell and landline numbers. Public Opin Q. 2006, 70: 780-3. 10.1093/poq/nfl031.

2. Curtin R, Presser S, Singer E: Changes in telephone survey non-response over the past quarter century. Public Opin Q. 2005, 69: 87-98. 10.1093/poq/nfi002.

3. Groves RM, Couper MP, Presser S, Nelson: Experiments in producing nonresponse bias. Public Opin Q. 2006, 70: 720-36. 10.1093/poq/nfl036.

4. Keeter S, Kennedy C, Dimock M, Best J, Craighill P: Gauging the impact of growing non-response on estimates from a national RDD telephone survey. Public Opin Q. 2006, 70: 759-79. 10.1093/poq/nfl035.

5. Tuckel P, O'Neill H: The vanishing respondent in telephone surveys. J Advert Res. 2002, 42: 26-48.

6. Shannon DM, Bradshaw CC: A comparison of response rate, response time, and costs of mail and electronic surveys. J Exp Educ. 2002, 70: 179-192. 10.1080/00220970209599505.

7. Couper MP: Web surveys: A review of issues and approaches. Public Opin Q. 2000, 64: 464-494. 10.1086/318641.

8. Dickinson A, Newell AF, Smith MJ, Hill RL: Introducing the Internet to the over-60s: Developing an email system for older novice computer users. Interact Comput. 2005, 17: 621-642. 10.1016/j.intcom.2005.09.003.

9. Lam JCY, Lee MKO: Digital inclusiveness: Longitudinal study of Internet adoption by older adults. J Manag Inf Syst. 2006, 22: 177-206. 10.2753/MIS0742-1222220407.

10. Dillman DA, Phelps G, Tortora R, Swift K, Kohrell J, Berck J: Response rate and measurement differences in mixed-mode surveys using mail, telephone, interactive voice response (IVR) and the Internet. Soc Sci Res. 2001, 38: 1-18. 10.1016/j.ssresearch.2008.03.007.

11. Kempf AM, Remington PL: New challenges for telephone survey research in the twenty-first century. Annu Rev Public Health. 2007, 28: 113-26. 10.1146/annurev.publhealth.28.021406.144059.

12. Link MW, Battaglia MP, Frankel MR, Osborn L, Mokdad AH: Address-based versus random-digit-dial surveys: comparison of key health and risk indicators. Am J Epidemiol. 2006, 164: 1019-1025. 10.1093/aje/kwj310.

13. Link MW, Mokdad AH, Kulp D, Hyon A: Has the national do not call registry helped or hurt state-level response rates? Public Opin Q. 2005, 70: 794-809. 10.1093/poq/nfl030.

14. McHorney CA, Kosinski M, Ware JE: Comparisons of the costs and quality of norms for the SF-36 health survey collected by mail versus telephone interview: results from a national survey. Med Care. 1994, 32: 551-567. 10.1097/00005650-199406000-00002.

15. Dillman DA: Mail and Internet Surveys: The Tailored Design Method. 2000, New York, NY: Wiley.

16. Dillman DA, Sinclair MD, Clark JR: Effects of questionnaire length, respondent-friendly design, and a difficult question on response rates for occupant-addressed census mail surveys. Public Opin Q. 1993, 57: 289-304. 10.1086/269376.

17. Fox RJ, Crask MR, Kim J: Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opin Q. 1988, 52: 467-491. 10.1086/269125.

18. James JM, Bolstein R: The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opin Q. 1990, 54: 346-361. 10.1086/269211.

19. Hawes JM, Crittenden VL, Crittenden WF: The effects of personalization, source and offer on mail survey response rate and speed. Akron Bus Econ Rev. 1987, 18: 54-63.

20. Erwin WJ: Improving mail survey response rates through the use of a monetary incentive. J Ment Health Couns. 2002, 24: 247-55.

21. Yammarino FJ, Skinner SJ, Childers TL: Understanding mail survey response behavior: A meta-analysis. Public Opin Q. 1991, 55: 613-619. 10.1086/269284.

22. Edwards P, Cooper R, Roberts I, Frost C: Meta-analysis of randomized clinical trials of monetary incentives and response to mailed questionnaires. J Epidemiol Community Health. 2005, 59: 987-999. 10.1136/jech.2005.034397.

23. Beebe TJ, Davern ME, McAlpine DD, Call KT, Rockwood TH: Increasing response rates in a survey of Medicaid enrollees: the effect of a prepaid monetary incentive and mixed modes (mail and telephone). Med Care. 2005, 43: 411-414. 10.1097/01.mlr.0000156858.81146.0e.

24. Trussell N, Lavrakas PJ: The influence of incremental increases in token cash incentives on mail survey response: Is there an optimal amount? Public Opin Q. 2004, 68: 349-67. 10.1093/poq/nfh022.

25. King KA, Pealer LN, Bernard AL: Increasing response rates to mail questionnaires: A review of inducement strategies. Am J Health Educ. 2001, 32: 4-15.

26. Warriner K, Goyder J, Gjertsen H, Hohner P, McSpurren K: Charities, no; lotteries, no; cash, yes: Main effects and interactions in a Canadian incentives experiment. Public Opin Q. 1996, 60: 542-562. 10.1086/297772.

27. Jepsen C, Asch D, Hershey J, Ubel P: In a mailed physician survey, questionnaire length had a threshold effect on response rate. J Clin Epidemiol. 2004, 58: 103-105. 10.1016/j.jclinepi.2004.06.004.

28. Dillman DA: Mail and Telephone Surveys: The Total Design Method. 1978, New York, NY: Wiley and Sons.

29. Deutskens E, de Ruyter K, Wetzels M, Oosterveld P: Response rate and response quality of Internet-based surveys: An experimental study. Mark Lett. 2004, 15: 21-36. 10.1023/B:MARK.0000021968.86465.00.

30. Jacoby A: Possible factors affecting response to postal questionnaires: Findings from a study of general practitioner services. J Public Health. 1990, 12: 131-135.

31. Mond JM, Rodgers B, Hay PJ, Owen C, Beumont PJ: Mode of delivery, but not questionnaire length, affected response in an epidemiological study of eating-disordered behavior. J Clin Epidemiol. 2004, 57: 1167-71. 10.1016/j.jclinepi.2004.02.017.

32. Sitzia J, Wood N: Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care. 1998, 10: 311-317. 10.1093/intqhc/10.4.311.

33. Kalantar JS, Talley NJ: The effects of lottery incentive and length of questionnaire on health survey response rates: a randomized study. J Clin Epidemiol. 1999, 52: 1117-22. 10.1016/S0895-4356(99)00051-7.

34. Eaker S, Bergström R, Bergström A, Adami H-O, Nyren O: Response rate to mailed epidemiologic questionnaires: A population-based randomized trial of variations in design and mailing routines. Am J Epidemiol. 1998, 147: 74-82.

35. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, Kwan I: Increasing response rates to postal questionnaires: Systematic review. Br Med J. 2002, 324: 1183.

36. Edwards P, Roberts I, Sandercock P, Frost C: Follow-up by mail in clinical trials: Does questionnaire length matter? Control Clin Trials. 2004, 25: 31-52. 10.1016/j.cct.2003.08.013.

37. Smith T, Stein KD, Mehta CC, Kaw C, Kepner J, Stafford J, Baker F: The rationale, design and implementation of the American Cancer Society's Studies of Cancer Survivors. Cancer. 2007, 109: 1-12. 10.1002/cncr.22387.

38. Goldstein BI, Levitt AJ: A gender-focused perspective on health service utilization in comorbid bipolar I disorder and alcohol use disorders: results from the national epidemiologic survey on alcohol and related conditions. J Clin Psychol. 2006, 67: 925-932.

39. Redondo-Sendino A, Guallar-Castillón P, Banegas J, Rodríguez-Artalejo F: Gender differences in the utilization of health-care services among the older adult population of Spain. BMC Public Health. 2006, 6: 155. 10.1186/1471-2458-6-155.

40. Ries L, Melbert D, Krapcho M, Mariotto A, Miller BA, Feuer EJ, Clegg L, Horner MJ, Howlader N, Eisner MP, Reichman M, Edwards BK, eds: SEER Cancer Statistics Review, 1975-2004. http://seer.cancer.gov/csr/1975_2004/

41. Martin E, Abreu D, Winters F: Money and motive: Effects of incentives on panel attrition in the Survey of Income and Program Participation. J Off Stat. 2001, 17: 267-284.

42. Singer E, Gebler N, Trivellore R, Van Hoewyk J, McGonagle K: The effect of incentives in interviewer-mediated surveys. J Off Stat. 1999, 15: 217-230.

43. American Association for Public Opinion Research: Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 2006. http://www.aapor.org/standards

44. Centers for Disease Control and Prevention: Compressed Mortality File 1999-2004. CDC WONDER On-line Database, compiled from Compressed Mortality File 1999-2004 Series 20 No. 2J, 2007. http://wonder.cdc.gov/cmf-icd10.html

45. Asch DA, Jedrziewski MK, Christakis NA: Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997, 50: 1129-1136. 10.1016/S0895-4356(97)00126-1.

46. Guidry JJ, Aday L, Zhang D, Winn RJ: Cost considerations as potential barriers to cancer treatment. Cancer Pract. 1998, 6: 182-187. 10.1046/j.1523-5394.1998.006003182.x.

47. Baruch Y: Response rate in academic studies: A comparative analysis. Hum Relat. 1999, 52: 421-438.

48. Jansen SJ, Otten W, Baas-Thijssen MC, van de Velde CJ, Nortier JW, Stiggelbout AM: Explaining differences in attitude toward adjuvant chemotherapy between experienced and inexperienced breast cancer patients. J Clin Oncol. 2005, 23: 6623-6630. 10.1200/JCO.2005.07.171.

49. Groves RM, Singer E, Corning A: Leverage-saliency theory of survey participation. Public Opin Q. 2000, 64: 299-308. 10.1086/317990.

Acknowledgements

The authors are grateful to: Anca Romantan, Sandy Schwartz, Katrina Armstrong, Angela DeMichele, David Lee, Megan Kasimatis, Stacy Gray, Aaron Smith-McLallen, Annice Kim, Norman Wong, Susana Ramirez, Rebekah Nagler, Shawnika Hull & Chul-joo Lee for their help with instrument development, pre-testing, and logistical details.

Funding: This work was supported by the National Cancer Institute [grant number 5P50CA095856]

These data were supplied by the Bureau of Health Statistics and Research, Pennsylvania Department of Health, Harrisburg, Pennsylvania. The Pennsylvania Department of Health specifically disclaims responsibility for any analyses, interpretations or conclusions.

Author information

Corresponding author

Correspondence to Bridget J Kelly.

Additional information

Competing interests

Dr. Kelly works for Research Triangle Institute, a non-profit research institution that provides survey research services.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Kelly, B.J., Fraze, T.K. & Hornik, R.C. Response rates to a mailed survey of a representative sample of cancer patients randomly drawn from the Pennsylvania Cancer Registry: a randomized trial of incentive and length effects. BMC Med Res Methodol 10, 65 (2010). https://doi.org/10.1186/1471-2288-10-65
