Everyone hates data transitions, but sometimes they are necessary. In most of the world, marketing research has already made the transition from telephone or face-to-face interviewing to online. When these transitions happen, we typically see data differences, some of which can be measured, calibrated, and explained, while others are harder to trace to a root cause.
Differences in population: One set of differences that is easy to measure is caused by differences in the populations being measured. Lower Internet penetration means an online sample is less representative of the population, because not all members of the population are online. Regardless of country, some key groups are generally less likely to be online: older adults, the less affluent, those in more rural areas, and those with less education. As Internet penetration increases, the online population becomes more like the total population of the country. It is important to consider whether a particular target group can be reached with an online survey. If the group reached online differs from the group previously reached with an offline methodology, results can differ as well.
Social desirability bias: Latin America, China, and several other Asian countries are now in the throes of the transition to online. Researchers are busy trying to explain data differences, and I had the pleasure of contributing to a scholarly work showing that social desirability bias is a key explanation for differences between online and phone. Under social desirability bias, respondents are less likely to report negative things and more likely to report positive things. The respondent wants to look good, so, for example, they are less likely to admit they smoke and more likely to claim they exercise. The presence of an interviewer on the phone increases social desirability bias.
This new research used data from the Advertising Research Foundation’s Foundations of Quality 2 project (ARF FoQ 2). In this study, 17 online sample providers supplied nonprobability-based online samples and one provider supplied a probability-based RDD (random-digit-dial) phone sample. We developed a multivariate model that predicts the direction and magnitude of social desirability bias. Applying the resulting correction factor brings phone and online results much more in line. When the model was applied to holdout data not used in its development, it likewise showed significantly less difference between the two modes. This research and analysis suggest that online results may be less biased, especially for sensitive questions.
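To make the idea of a mode-effect correction concrete, here is a minimal, hypothetical sketch. It is not the published ARF FoQ 2 model: the item names, sensitivity scores, and simple linear form are illustrative assumptions. The sketch models the phone-versus-online gap as a function of how socially desirable each answer is, then uses that fit to adjust the phone estimates toward the online ones.

```python
# Hypothetical illustration of a social-desirability mode correction.
# NOT the published ARF FoQ 2 model; items, sensitivity scores, and the
# linear form are assumptions made for demonstration only.
import numpy as np

# Per-item "yes" rates (%) from the two modes, plus an assumed
# sensitivity score (+ = desirable to claim, - = undesirable to admit).
items = ["exercise regularly", "smoke", "donate to charity", "binge-watch TV"]
sensitivity = np.array([0.9, -0.8, 0.7, -0.3])
phone = np.array([72.0, 18.0, 55.0, 40.0])    # interviewer present
online = np.array([61.0, 27.0, 47.0, 44.0])   # self-administered

# Model the phone-vs-online gap as a linear function of item sensitivity.
gap = phone - online
slope, intercept = np.polyfit(sensitivity, gap, 1)

# Apply the fitted correction to the phone estimates.
phone_corrected = phone - (slope * sensitivity + intercept)

for name, p, o, pc in zip(items, phone, online, phone_corrected):
    print(f"{name:22s} phone={p:5.1f}  online={o:5.1f}  corrected={pc:5.1f}")
```

In this toy setup, the more socially desirable an item is to claim (or deny), the larger the gap the model attributes to the interviewer effect, and the larger the adjustment applied to the phone estimate.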
Lightspeed GMI has an abundance of experience moving research from offline to online. That experience can be used to ease the transition in Asia and Latin America.
Gittelman, S., Lange, V., Cook, W.A., Frede, S.M., Lavrakas, P.J., Pierce, C., & Thomas, R.K. (2015). Accounting for social-desirability bias in survey sampling. Journal of Advertising Research, 55(3), 242-254.