“You need to allow smartphone respondents into your survey!” Those of us in the business of survey design and panel management have been telling researchers this for years now, but we need to show you, not just tell you, why.
In our latest Q&A, Chris Stevens, Chief Research Officer at Lightspeed, reviews some of the sticking points around this.
Are we sounding like a broken record when we continue to tell clients to "go mobile with your research"?
I stood up on stage at an MRMW conference in 2015 with a warning: the industry would eventually struggle with surveys that did not allow smartphone devices, and the time to start planning for that future was then. Some researchers heeded this advice and now successfully allow all devices into their studies. But this is not the case for all researchers, and as a result we see those researchers struggling today to achieve their survey targets.
In 2018, the MRS in the UK invited Lightspeed and other major suppliers to work together on a new initiative reviewing global cross-supplier statistics, and this showed that many surveys (approximately 50%) still did not allow mobile devices. Among those that did allow all devices, the conversion rate from start to complete was lower for smartphones than for PCs, driven mainly by higher incomplete rates. When we presented the findings to an audience of researchers, I was surprised by how many questions from the floor amounted to: “How do you design a mobile-friendly survey?”
From the data we see, and the same is true for other suppliers, the majority of new recruits and respondents want to interact with surveys on a smartphone.
Researchers can be wary of making survey adjustments over time. Does keeping a survey constant not guarantee consistent data?
It is key to understand that some level of adaptation is required, particularly for trackers and other normed designs, so that all devices are allowed in. Some of the feedback we get is that surveys cannot be changed and need to be kept constant – for example, trackers specifically designed to exclude or limit smartphone respondents in order to keep the data consistent with prior waves.
Unfortunately, the opposite may be true. Each year, more smartphone respondents want to take surveys. This directly reflects the technology consumers are using throughout their day, specifically the time many spend on their smartphones. Therefore, when smartphone respondents are restricted, a large portion of the population is not represented in the data. In effect, the population available to a PC-only survey is changing: it is getting smaller and less representative of the intended research population – it is not constant! This means that underlying trends might shift in ways that do not reflect reality.
It's not only demographics that are missed when excluding smartphone respondents. Research that Lightspeed has carried out shows that the profiles missing from data sets are behaviorally different. Those more likely to manage their lives on a smartphone may be no less aware of key brands (although that could change in the future), but we have found that they do exhibit different behaviors, such as in their media usage. When measuring digital campaigns, leaving these respondents out of a survey or tracker may lead to misleading final results.
So, why has the change been slow? How can researchers begin to adapt?
Basically, because it's quite a challenge. I believe this is a harder switch than the move from offline to online research years ago, when many offline surveys were simply carried across, with some modifications, from interviewer-administered to self-administered formats. The general length of the survey did not change much then, as the survey population was mainly on PCs and laptops.
People find it hard to adapt and shorten their surveys because:
- They don’t want to lose the data already collected from existing questions
- They don’t want to lose trends in the data
To avoid losing data trends, those unwilling to adapt their surveys are forced to add more respondent sources into the mix to fulfill their quotas, but we know that different suppliers can produce different results. With this fix, then, there is a risk to the consistency of the data if the mix changes over time. And, as noted previously, each of those sources will be providing an unrepresentative portion of its supply.
We can empathise with the position researchers are in; however, it's time to start getting creative. For example, when fielding a monthly tracker, do the same questions need to be asked every month? Or can some questions be asked in a staggered way, with only the key questions asked in every wave? A simple rotation along these lines is sketched below.
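To make the idea concrete, here is a minimal sketch of such a staggered design in Python. The question and module names, the three-wave rotation cycle, and the helper function are all hypothetical, not a prescription for any particular tracker: a core block fields in every wave, while longer modules rotate so each one returns on a fixed cycle.

```python
# A minimal, illustrative sketch of a staggered tracker design:
# a core block of questions fields in every monthly wave, while
# longer modules rotate so each returns on a fixed cycle.
# All question/module names here are hypothetical.

CORE = ["brand_awareness", "purchase_intent", "ad_recall"]

ROTATING_MODULES = [
    ["media_habits"],       # waves 1, 4, 7, ...
    ["brand_perceptions"],  # waves 2, 5, 8, ...
    ["category_usage"],     # waves 3, 6, 9, ...
]

def questions_for_wave(wave: int) -> list[str]:
    """Return the question blocks to field in a given wave (1-based)."""
    rotating = ROTATING_MODULES[(wave - 1) % len(ROTATING_MODULES)]
    return CORE + rotating

if __name__ == "__main__":
    for wave in range(1, 7):
        print(f"Wave {wave}: {questions_for_wave(wave)}")
```

With a design like this, key metrics keep an unbroken monthly trend, while each rotating module still trends quarter over quarter at a fraction of the per-wave survey length.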
Also, consider data that can be appended to the survey results, eliminating the need to ask certain questions within the survey. Third-party data sources, such as behavioral segmentations, can be impactful additions to a data set, building richer consumer profiles for target audiences.
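As an illustration of that appending step, the sketch below joins a hypothetical third-party behavioral segmentation onto survey results via a shared respondent ID; all IDs, column names, and values are invented for the example, and the join key would depend on what the data supplier actually provides.

```python
import pandas as pd

# Illustrative only: join a hypothetical third-party behavioral
# segmentation onto survey results via a shared respondent ID,
# so the segment does not need to be asked in the survey itself.
# All IDs, columns, and values below are invented for the example.

survey = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "brand_awareness": [1, 0, 1],
})

segments = pd.DataFrame({
    "respondent_id": [101, 103],
    "behavioral_segment": ["mobile_first", "light_tv"],
})

# A left join keeps every survey respondent, even those
# without a matched segment record.
enriched = survey.merge(segments, on="respondent_id", how="left")
print(enriched)
```

Every question answered by an append rather than asked in the questionnaire shortens the survey, which is exactly what smartphone completion rates need.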
What else can make a survey design future-proof? These are the discussions that need to be happening now.