I recently had the opportunity to lecture to a class of students in the Masters of Market Research program at the University of Texas Arlington. Despite working for years in an industry where I live and breathe sampling every day, I looked back at my old textbook to see what it said about sampling. I noticed a scribbled note I had taken years ago: “No one thinks about sampling, until it goes wrong!!”
There is real truth to this statement; however, when sampling is SUCH an important part of the research process, why does no one think about it? I would like to think it’s because we make things so easy. However, as I was speaking to students who are the future of the MR industry, I thought very carefully about what they needed to know about sampling. Here is what I came up with:
Not all sample is created equal.
Technology continues to advance, and methods of contacting consumers continue to evolve. Almost every website I visit asks for my opinion; I get at least one email a day asking for my opinion (not to mention, as I leave the bathroom at DFW airport, there is an iPad mounted to the wall asking me to rate my experience). So, when filling your study, make sure you know exactly where your sample is coming from.
Faster isn’t always better.
We’ve all heard it: Time is money. At least once a day I am asked the question “how fast can we get out of field?” In an industry where everyone needs everything “yesterday,” sometimes getting data faster isn’t better. Like the Cheez-It commercial teaches us, some things need time to mature. Consumers who respond at different times of day (or days of the week) will likely come from different demographics. Allowing adequate time in field will help ensure a representative sample.
Panelists are people.
You’ll notice that throughout this post, I refer to panelists as consumers.
If you think of them in this way, you’re more likely to ensure they have a positive experience, and in return they will be more likely to provide you with better insights.
KISS: Keep it super short.
I am not advocating for fewer than five questions, but the days of 40-minute surveys are gone. While we can field these, it really isn’t beneficial for the consumer or the researcher. Consumers experience “survey fatigue” when surveys are too long, and your data will be impacted. In fact, many consumers take surveys on their smartphones (that’s a topic for a whole other post), so keep your survey to a reasonable length for a positive experience. At Lightspeed, we suggest 15 minutes or less.
Give thought to census balancing.
On a daily basis we have clients asking us for census balancing, but what does that really mean and what are they really trying to achieve? We can balance on outgo, starts or completes and it will all be considered “census balanced.” I could write a whole blog post on this single topic, but here is the gist:
- Balanced Outgo/Sends: Sample is sent so that a census-balanced blend of invites goes out to potential participants of the study. In my experience, this is often what clients confuse with balanced starts.
- Balanced Starts: Balancing is measured by the panelists who begin the survey by clicking on the link. This method is used to understand the target population and the main users of the product. Balancing for starts (or clicks) does not mean the completes will be balanced, because the incidence for one demographic group may be higher or lower than for another. The main goal of balanced starts is to achieve exactly that: balanced starts. If you need balanced starts, there can be no end quotas for demographic groups.
- Balanced Completes: Sample is sent to achieve a census-balanced end representation of the panelists who finish the survey. Some groups will end up being oversampled to fill the required end quotas. This method also tends to assume that incidence will be consistent across demographics. If incidence is not the same for all demographics, the overall incidence will likely drop toward the end of fielding, as the remaining completes must come from the harder-to-fill quota groups.
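To make the difference between these methods concrete, here is a minimal sketch of why census-balanced sends don’t guarantee census-balanced completes. All of the census proportions and per-group incidence rates below are made-up illustrative numbers, not real panel figures:

```python
# Hypothetical example: balanced outgo vs. the completes it actually produces.
# Census shares and incidence rates are invented for illustration only.

census = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}      # target population shares
incidence = {"18-34": 0.10, "35-54": 0.20, "55+": 0.30}   # assumed qualify/complete rates

def balanced_outgo(total_invites, census_shares):
    """Allocate invites in census proportions (the 'balanced sends' method)."""
    return {group: round(total_invites * share) for group, share in census_shares.items()}

def expected_completes(invites, incidence_rates):
    """Expected completes per group when incidence differs by demographic."""
    return {group: round(n * incidence_rates[group]) for group, n in invites.items()}

invites = balanced_outgo(10_000, census)       # sends match census: 3000 / 3500 / 3500
completes = expected_completes(invites, incidence)
total = sum(completes.values())
complete_shares = {group: count / total for group, count in completes.items()}
# Even though invites were census balanced, the completes skew toward the
# higher-incidence 55+ group, so the final data is no longer census balanced.
```

The same arithmetic run in reverse is what balanced completes does: it oversamples the low-incidence groups on the send side so the finishers, not the invites, match census.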