Part 5 in our series, So Many Variables, So Little Time: A practical guide to what to worry about when conducting multi-country studies.
When it comes to conducting multi-country research studies, our research has shown that the way questions are posed to respondents can greatly influence survey results. In fact, question design is the single most important factor in improving overall data quality.
The topic of question design is vast and multi-faceted, and it has been the focus of a great deal of our research on research. Over the past several years, we have published several papers on question design techniques and their impact on research results. Our most recent paper, Dimensions of Online Survey Data Quality: What Really Matters?, examines this factor through two large-scale multi-country survey experiments, interviewing more than 11,000 respondents in 15 countries using a treatment-versus-control-group approach. Our goal was to understand the impact of question design, along with other factors, on the quality of survey data across countries, focusing on the relative effects of different techniques on the answers given.
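To make the treatment-versus-control idea concrete, a simple way to quantify how much a question-design change shifts answers is to compare the answer distributions of the two groups. The sketch below is purely illustrative, assuming hypothetical response counts and using total variation distance as a stand-in metric; it is not the study's actual "data variance" measure.

```python
# Illustrative sketch: measuring the shift in answer distributions between
# a control group (standard question format) and a treatment group (modified
# format). The counts and the metric (total variation distance) are
# assumptions for demonstration, not figures or methods from the study.

def answer_shares(counts):
    """Convert raw response counts into proportions."""
    total = sum(counts)
    return [c / total for c in counts]

def distribution_shift(control_counts, treatment_counts):
    """Total variation distance between two answer distributions (0 to 1)."""
    p = answer_shares(control_counts)
    q = answer_shares(treatment_counts)
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical counts of respondents choosing each of four answer options.
control = [120, 300, 400, 180]    # plain-text version of the question
treatment = [200, 280, 340, 180]  # same question presented with images

shift = distribution_shift(control, treatment)
print(f"Distribution shift: {shift:.1%}")
```

Running the comparison per country would also reveal whether a format change behaves consistently across markets, which is the cross-country consistency question the experiments address.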
We looked at several sources of data variance due to question design, including the use of imagery and iconography, the presentation format of questions, and the effect of question wording on respondents' motivation to answer.
Using images to support choice selection in surveys has significant benefits, allowing researchers to communicate concepts more effectively than words alone. However, its impact on the data is equally substantial: in our experiment, imagery introduced a data variance of 34 percent, very high compared to other sources of variance.
Our paper presents examples demonstrating how the choice of specific images affected the results of specific surveys. In some cases, imagery improved recall scores by upwards of 50 percent; in others, it weakened them because respondents interpreted the image too literally. The bottom line is that using imagery in surveys without calibration can introduce overwhelming data variance effects, so imagery must be implemented carefully.
The use of icons, on the other hand, produced less than 2 percent data variance, yet reduced speeding, and the data variance associated with it, by as much as 20 percent. Icons also increased the consistency of responses across countries.
We also explored the impact of asking questions in more interactive formats, such as dragging and dropping, flag sorting, star rating, gambling, list building and various forms of dynamic animated grids.
Only two formats in this experiment delivered measurable data variance. The first was the gambling methodology, in which respondents were asked to bet imaginary money on various choices (details of this technique can be found in The Game Experiments). Gambling encouraged respondents to be slightly more circumspect, and thus shifted the data only minimally. The second format that caused a significant difference was list building. This method requires further investigation, but the likely explanation is that respondents tend to sort items into evenly sized groups, something they may not do naturally when evaluating items monadically.
Notably, we found that in every case these more creative questioning methods reduced straightlining and increased data granularity.
How questions are worded can greatly affect how they are interpreted. In our experiments we tested wording styles that asked respondents to imagine themselves in certain roles and situations, creating a personal connection between the respondent and the question. Our paper documents several examples, all of which demonstrate that this technique evokes a higher volume of feedback as well as higher levels of respondent engagement and satisfaction.
However, these examples also serve to illustrate that question wording is not only a critical part of the process of survey design but also a creative skill of the researcher. It is wholly dependent upon the subjects, goals and context of the research study, and thus, does not conform to a set of rules, standards or best practices. The most important thing is to be conscious of the impact of question wording on data outcomes, and to make frequent use of pilot tests.
This is part 5 in a series. If you would like to read more, visit:
- http://www.ls-gmi.com/data-quality/the-answer-lies-in-the-question/