The use of research questionnaires with hearing impaired adults: online vs. paper-and-pencil administration
1 Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
2 Department of Clinical and Experimental Medicine, Division of Technical Audiology, Linköping University, Linköping, Sweden
3 The Swedish Institute for Disability Research, Örebro and Linköping University, Linköping, Sweden
4 Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
5 Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
BMC Ear, Nose and Throat Disorders 2012, 12:12. doi:10.1186/1472-6815-12-12. Published: 29 October 2012
When evaluating hearing rehabilitation, it is reasonable to use self-report questionnaires as outcome measures. Questionnaires used in audiological research have been developed and validated for the paper-and-pencil format. As computer and Internet use increases, standardised questionnaires used in the audiological context should be evaluated to determine the viability of the online administration format.
The aim of this study was to compare online versus paper-and-pencil administration of four standardised questionnaires used in hearing research and clinical practice. We included the Hearing Handicap Inventory for the Elderly (HHIE), the International Outcome Inventory for Hearing Aids (IOI-HA), Satisfaction with Amplification in Daily Life (SADL), and the Hospital Anxiety and Depression Scale (HADS).
A cross-over design was used: participants were randomly assigned to complete the questionnaires first either online or on paper. After 3 weeks, the participants completed the same questionnaires again, but in the other format. A total of 65 hearing-aid users were recruited from a hearing clinic on a voluntary basis, and 53 of these completed both versions of the questionnaires.
A significant main effect of format was found on the HHIE (p < 0.001), with participants reporting higher scores in the online format than in the paper format. There was no interaction effect. For the other questionnaires, there were no significant main or interaction effects of format. Significant correlations between the two administration formats were found for all questionnaires (p < 0.05). Reliability tests showed Cronbach's α values above .70 for all four questionnaires, and differences in Cronbach's α between administration formats were negligible.
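The internal-consistency statistic reported above, Cronbach's α, is computed from the item variances and the variance of the total scale score. As an illustration only (the paper does not publish its analysis code), a minimal sketch of the standard formula, using hypothetical item-response data, might look like this:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    `items` is a list of k lists, one per questionnaire item, each holding
    the scores of the same respondents in the same order.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    k = len(items)
    sum_item_vars = sum(pvariance(item) for item in items)
    # Per-respondent total score across all items
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Hypothetical 3-item scale answered by 4 respondents (illustrative data,
# not from the study)
responses = [
    [1, 2, 3, 4],
    [2, 2, 4, 4],
    [1, 3, 3, 5],
]
print(round(cronbach_alpha(responses), 3))  # → 0.933
```

Values above .70, as found for all four questionnaires here, are conventionally taken to indicate acceptable internal consistency.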
For three of the four included questionnaires, the participants' scores remained consistent across administrations and formats. For the fourth questionnaire (HHIE), a significant effect of format with a small effect size was found. The relevance of the difference in scores between formats depends on the context in which the questionnaire is used. On balance, it is recommended that the administration format remain stable across assessment points.