Kantar's Profiles Blog

Sampling Best Practice: Probability versus Nonprobability, Redux

Posted by Susan Frede on May 31, 2016

A dozen years ago a debate raged in the marketing research community over the switch from probability sampling methods, such as telephone random digit dialing (RDD), to nonprobability methods such as those typical of online access panels. In the years since, most clients have moved to online samples, though some still cling to probability methods. Now, however, the quality of probability samples is itself being questioned because of low RDD response rates. In an interesting twist, the very techniques that nonprobability samples use to weight and model data must now often be applied to probability samples to account for nonresponse bias.

In early May, Pew Research Center published a report entitled “Evaluating Online Nonprobability Surveys.” Pew fielded a survey to nine online nonprobability samples from eight different suppliers. The survey included 56 measures, among them 20 benchmarks with comparison data drawn from high-quality government research. Results were also compared with Pew’s American Trends Panel (ATP), which was recruited via RDD in early 2014.

Many of Pew’s findings coincide with what Lightspeed GMI recommends as best-in-class sampling. Our basic premise is to pull an appropriate sample using the right variables for each client; sampling should be customized to the client’s expressed objectives and needs. We were one of the suppliers in the Pew work and generally performed in the middle or upper-middle of the pack. In this case, we were given sampling instructions to control on only age and gender. In hindsight, we should have insisted on additional controls to improve sample quality and performance; applying our best practices would have moved us to the top of the pack.

The Lightspeed GMI Sampling Best Practices include:

  1. Understand the objectives and analytic plan for the research, and who the end client is, so an appropriate sample is built.
  2. Regardless of the objectives and analytic plan, at a minimum balance the outgoing sample on age, gender, region, and income/social class. Additional variables may be needed based on the objectives and analytic plan.
  3. Be cautious with interlocking too many quotas. The basic recommendation is to interlock only age and gender; interlocking more variables creates a level of complexity that is very hard to deliver against.
  4. Instead of interlocking, if a subgroup is particularly important (e.g., Hispanics, African Americans, young adults, males), consider splitting the sampling by subgroup to make sure each subgroup is truly representative. For example, instead of building a single sample, two samples might be built: one for males and another for females.
  5. Set survey quotas for key variables, generally age and gender plus one other key variable.
  6. Be cautious with short field periods. At a minimum we recommend a two- to three-day field period; more academic research might call for at least seven days.
  7. Don’t be afraid to weight data. RIM weighting is the recommended approach. Before weighting on non-demographic characteristics, think carefully about the implications.
  8. Remember that sample sources aren’t the same, so devise a sampling plan that keeps the sample blend fairly consistent for the life of the project.
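To make item 7 concrete: RIM weighting (also known as raking, or iterative proportional fitting) repeatedly rescales respondent weights so that the weighted sample matches target proportions on each variable in turn. The sketch below is a minimal, generic illustration of that idea; the function name, respondent structure, and example targets are our own assumptions for exposition, not Lightspeed GMI's production implementation.

```python
def rim_weight(respondents, targets, max_iter=100, tol=1e-6):
    """Minimal RIM/raking sketch: iteratively adjust per-respondent
    weights so each variable's weighted distribution matches its
    target proportions (targets sum to 1.0 per variable)."""
    weights = [1.0] * len(respondents)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in targets.items():
            total = sum(weights)
            # Current weighted total in each category of this variable.
            current = {cat: 0.0 for cat in target}
            for w, r in zip(weights, respondents):
                current[r[var]] += w
            # Ratio needed to move each category to its target share.
            factors = {cat: (target[cat] * total) / current[cat]
                       for cat in target if current[cat] > 0}
            for i, r in enumerate(respondents):
                f = factors.get(r[var], 1.0)
                weights[i] *= f
                max_shift = max(max_shift, abs(f - 1.0))
        if max_shift < tol:  # all margins within tolerance
            break
    return weights

# Illustrative use: a small sample skewed male, raked to 50/50 targets
# on both gender and age.
resp = ([{"gender": "M", "age": "young"}] * 3
        + [{"gender": "F", "age": "young"}]
        + [{"gender": "M", "age": "old"}]
        + [{"gender": "F", "age": "old"}])
targets = {"gender": {"M": 0.5, "F": 0.5},
           "age": {"young": 0.5, "old": 0.5}}
w = rim_weight(resp, targets)
```

After raking, the weighted male share and weighted young share both land on 0.5, even though the unweighted sample is 4:2 male. This is also why the post's caution about weighting on non-demographic characteristics matters: every additional raking variable spreads the weights further, which increases variance.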

Topics: Online Sampling, marketing research best practices, nonprobability samples, probability samples
