Methodological issues associated with collecting sensitive information over the telephone - experience from an Australian non-suicidal self-injury (NSSI) prevalence study
1 Population Research and Outcome Studies, School of Medicine, University of Adelaide, Adelaide (5000), Australia
2 Child and Adolescent Psychiatry, The University of Queensland, Queensland (4072), Australia
3 Centre for Suicide Prevention Studies, The University of Queensland, Queensland (4072), Australia
4 Sydney Medical School, The University of Sydney, Sydney (2006), Australia
5 Research Centre for Injury Studies, Flinders University, Adelaide (5042), Australia
BMC Medical Research Methodology 2011, 11:20 doi:10.1186/1471-2288-11-20. Published: 17 February 2011
Collecting population data on sensitive issues such as non-suicidal self-injury (NSSI) is problematic. Case note audits and hospital/clinic presentation records capture only severe cases and do not distinguish between suicidal and non-suicidal intent. Community surveys have largely been limited to school and university students, resulting in a scarcity of much-needed population-based data on NSSI. Collecting these data via a large-scale population survey presents challenges to survey methodologists. This paper addresses the methodological issues associated with collecting this type of data via computer assisted telephone interviewing (CATI).
An Australia-wide population survey was funded by the Australian Government to determine prevalence estimates of NSSI and its associations, predictors, relationships to suicide attempts and suicidal ideation, and outcomes. Computer assisted telephone interviews (CATI) were undertaken with a random sample of the Australian population aged 10 years and over, drawn from randomly selected households.
Overall, from 31,216 eligible households, 12,006 interviews were undertaken (response rate 38.5%). The 4-week prevalence of NSSI was 1.1% (95% CI 0.9-1.3%) and lifetime prevalence was 8.1% (95% CI 7.6-8.6%).
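The reported intervals are consistent with a simple Wald (normal approximation) confidence interval for a proportion computed over all 12,006 interviews; this is a minimal sketch for checking the figures and ignores any survey weighting or design effects the study may have applied.

```python
import math

def wald_ci(p, n, z=1.96):
    """95% Wald (normal approximation) confidence interval for a proportion.

    p: estimated proportion, n: sample size, z: critical value (1.96 for 95%).
    """
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

n = 12006  # completed interviews
print(f"response rate: {100 * 12006 / 31216:.1f}%")  # 38.5%
for label, p in [("4-week", 0.011), ("lifetime", 0.081)]:
    lo, hi = wald_ci(p, n)
    print(f"{label}: {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")
```

Run as written, this reproduces the published intervals (0.9-1.3% and 7.6-8.6%), suggesting the abstract's CIs were computed on the unweighted sample size.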
Methodological concerns and challenges in collecting these data included the need for extensive interviewer training and post-interview counselling. Ethical considerations, especially with children as young as 10 years of age being asked sensitive questions, were addressed prior to data collection. Addressing them required a large amount of information to be sent to each selected household before the telephone interview, which contributed to a lower than expected response rate. Non-coverage error, arising because the population of interest is disproportionately mobile, homeless or institutionalised, was also a suspected issue for this low-prevalence condition. In many surveys the numbers missing from the sampling frame are too small to cause concern relative to the population as a whole, but for our population of interest we believe the most likely direction of bias is towards an underestimation of our prevalence estimates.
Collecting valid and reliable data is a paramount concern of health researchers and survey methodologists. The challenge is to design cost-effective studies, especially those addressing low-prevalence issues, and to balance time and convenience against validity, reliability, sampling, coverage, non-response and measurement error.