Part 4 in our series, So Many Variables, So Little Time: A practical guide to what to worry about when conducting multi-country studies
Almost everybody speeds. Regardless of the country we are fielding a survey in, speeding is probably the biggest general data-quality problem we face when conducting online research. In our recent paper, Dimensions of Online Survey Data Quality: What Really Matters?, we presented research showing that 85 percent of respondents sped through at least one question.
The paper details the results of two large-scale multi-country survey experiments, interviewing more than 11,000 respondents in 15 countries, that tested this factor, along with several others, in a treatment-versus-control-group approach. Our goal was to understand the impact of speeders on the quality of survey data, and how to deal with this issue so that data can be compared accurately across countries.
We found, as we expected, that speeding increased as respondents progressed through the survey: it averaged 4% on the first few questions and rose to around 8% by the end, making it the largest single source of overall variance. We also found that younger age groups and men tend to answer survey questions faster.
National Differences in Speeding
At first glance, some nationalities appeared to speed through surveys far more than others. But once relative reading and comprehension times in each country are factored out of the significant differences in average completion time, it becomes clear that thinking times are remarkably similar from country to country.
Across all countries, we saw a rapid decay in the thinking time given to repeated questions. The first time a question was asked, it received an average of 7 seconds of thought; the second time, 5 seconds; and on the third and subsequent repetitions, an average of 2 seconds. For simple yes/no questions such as “Are you aware of this brand?” thinking time averaged about 1 second, and when the same item was presented in a multiple-choice list, thinking time dropped to less than 1 second.
Comparing answers from the slowest and fastest halves of the sample, we found significant differences in answers for binary (30% variance) and multiple-choice (40% variance) questions. On Likert-scale questions, speeding tended to bias data toward the positive end, and this bias was more pronounced in questions where there was natural disagreement. The root cause of speed-related data variance is declining thinking time.
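The comparison above can be illustrated with a small sketch: split respondents into fastest and slowest halves by completion time, then compare how often each half picks the top scale point. This is a hypothetical illustration with made-up field names and data, not the paper's actual analysis.

```python
# Illustrative sketch (not the paper's method): split a sample by
# completion time and compare answer distributions between halves.
# Field names ("seconds", "q1") and data are hypothetical.

def split_by_speed(respondents):
    """Split respondents into (fastest, slowest) halves by completion time."""
    ordered = sorted(respondents, key=lambda r: r["seconds"])
    mid = len(ordered) // 2
    return ordered[:mid], ordered[mid:]

def top_box_rate(group, question):
    """Share of a group choosing the top scale point (e.g. 'agree strongly' = 5)."""
    answers = [r[question] for r in group]
    return answers.count(5) / len(answers)

respondents = [
    {"seconds": 180, "q1": 5}, {"seconds": 210, "q1": 4},
    {"seconds": 420, "q1": 3}, {"seconds": 500, "q1": 2},
]
fast, slow = split_by_speed(respondents)
print(top_box_rate(fast, "q1"), top_box_rate(slow, "q1"))  # → 0.5 0.0
```

In this toy data, the fast half over-selects the positive end of the scale relative to the slow half, mirroring the positive bias the experiments observed on Likert-scale questions.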
Our experiments have shown that speeding has a significant impact on data across all countries, and that, because everybody speeds, simply removing all speeders from the sample is not an option. So, what can researchers do, aside from removing the most egregious speeders? Our hypothesis was that the answer is rooted in how we ask the questions.
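The pragmatic first step mentioned above, removing only the most egregious speeders, might be sketched as follows. This is a hypothetical approach with an assumed cutoff, not the authors' documented procedure: flag respondents whose typical per-question time falls far below the sample norm.

```python
# Illustrative sketch (not the authors' method): flag only the most
# egregious speeders, e.g. respondents whose median per-question time
# falls below some fraction of the sample-wide median. The 0.3 cutoff
# is an assumption chosen for illustration.

from statistics import median

def flag_egregious_speeders(times_per_respondent, fraction=0.3):
    """Return indices of respondents whose median per-question time is
    below `fraction` of the median of all respondents' medians."""
    medians = [median(times) for times in times_per_respondent]
    cutoff = fraction * median(medians)
    return [i for i, m in enumerate(medians) if m < cutoff]

times = [
    [6.0, 7.5, 5.0],   # typical respondent
    [1.0, 0.8, 0.9],   # extreme speeder
    [5.5, 6.0, 8.0],   # typical respondent
]
print(flag_egregious_speeders(times))  # → [1]
```

Using a median rather than a mean keeps the flag robust to a few unusually long pauses, but any such cutoff still leaves the bulk of mild speeding in the data, which is why question design matters.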
To test this hypothesis, we asked respondents two questions that were identical, except that one was phrased in a personal context and one in an impersonal context. Respondents answering the question phrased as, “Thinking about yourself, how much do you feel the following words describe you?” invested 70% more time than those who answered the question phrased as, “How much do the following words describe this brand?”
Applying the concept to Likert-scale questions, respondents spent nearly 50% more time answering the question phrased in a personal context.
The phrasing lesson is clear for all types of questions: the more you can contextualize a question in the mindset of the respondent, the more effort respondents will put into thinking about their answers. Taking a more creative approach to survey design is our most effective, indeed perhaps our only, weapon in the battle against speeding.
This is part 4 in a series. If you would like to read more, visit: http://www.ls-gmi.com/data-quality/getting-nowhere-fast-the-impact-of-speeders-in-multi-country-studies/