Today’s consumers already have a voice online; they publish opinions, build their own sites, post videos and share content. Technology is driving an increase in mobile activity, allowing people to connect anywhere, at any time. Because of this, it stands to reason that the way we interact with respondents is shifting. Market researchers now face the challenge of engaging consumers on their terms and competing for their time.
We’ve all been there. You spend endless hours perfecting a survey and imagining your ideal audience fallout. Might as well set some quotas to ensure that desired outcome, right? But now, as you take a step back, those quotas are starting to look a bit unrealistic and a little overwhelming. Take a deep breath, because I’m here to offer solutions to some common pitfalls in setting quotas.
I recently had the opportunity to lecture to a class of students in the Master of Market Research program at the University of Texas at Arlington. Despite having worked for years in an industry where I live and breathe sampling every day, I looked back at my old textbook to see what it said about sampling. I noticed a note I had scribbled years ago: “No one thinks about sampling, until it goes wrong!!”
There is a lot going on in the world of market research. Many clients depend on us as their partner in fieldwork; we act as their hands and feet, ensuring that their fieldwork is executed to the desired quality standard, on budget and on time. In this rapidly changing environment we need to excel across all deliverables, and we depend on our people, our technology and our (sampling) sources to do so. The partner who offers clients the best overall package, and can hit all of these marks consistently, becomes the preferred partner in the market. Given our clients’ high frequency of projects, that means excelling on a daily basis: we are only as good as the delivery of our last few jobs.
The recent compulsory online census in Australia stirred considerable controversy, and for seemingly good reason. Ahead of census day, many Australian residents raised concerns about being told they must share their information; when the census then proved virtually impossible to complete on the night itself (the handful who managed to submit it notwithstanding), many were up in arms.
Sampling often seems to be an afterthought with clients, as many simply state they want a ‘nationally representative sample.’ The question is what the client means by a nationally representative sample. One client might think it means representation on age and gender only, while another might expect it to include controls on additional variables such as region, income and education.
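To make that ambiguity concrete, here is a minimal sketch (in Python) of how a ‘nationally representative’ spec turns into hard quota targets. The variables and category shares are hypothetical placeholders, not real census figures.

```python
# Hypothetical quota spec: in practice the shares come from a real
# benchmark such as census data.
target_n = 1000  # desired number of completes

quota_spec = {
    "gender": {"male": 0.49, "female": 0.51},
    "age": {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
    # a stricter client might add region, income and education,
    # or interlock these variables into combined cells
}

for variable, shares in quota_spec.items():
    print(variable)
    for category, share in shares.items():
        print(f"  {category}: {round(share * target_n)} completes")
```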
A dozen years ago a debate raged in the marketing research community over the switch from probability sampling methods, such as telephone RDD, to the nonprobability sampling methods typical of online access panels. In the intervening years most clients moved to online samples, but some still cling to probability methods. Now, however, the quality of probability samples is itself being questioned because of low RDD response rates. In an interesting twist, the very same techniques that nonprobability samples use to weight and model data must now often be applied to probability samples to account for nonresponse bias.
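As a rough illustration of that twist, the sketch below applies the simplest such technique, cell-based post-stratification weighting. The population benchmarks and achieved sample are toy numbers; the same arithmetic corrects a skew whether it came from self-selection in a nonprobability sample or from nonresponse in an RDD sample.

```python
from collections import Counter

# Hypothetical population benchmarks: share of the population per cell.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Toy achieved sample in which younger respondents are overrepresented.
respondents = ["18-34"] * 50 + ["35-54"] * 30 + ["55+"] * 20

counts = Counter(respondents)
n = len(respondents)

# Weight per cell = population share / achieved sample share.
weights = {cell: population_share[cell] / (counts[cell] / n) for cell in counts}

for cell, weight in sorted(weights.items()):
    print(f"{cell}: weight = {weight:.2f}")
# 18-34 is down-weighted (0.60); 55+ is up-weighted (1.75).
```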
Research has consistently shown that not all panels are the same. Recruitment sources and management practices vary, and this can cause differences among panels. Beyond panels, there are other sources of online survey respondents, such as river, dynamic and social media sources, and these can produce data that differ from one another as well as from panel data. Given the wide variety of sample sources, and their trade-offs in cost and quality, researchers often struggle with the question, “How can I blend in other sources without impacting my data?”
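One common safeguard, sketched below under assumed numbers, is to fix each source’s share of completes and hold that blend constant from wave to wave, so that a change in the mix of sources cannot masquerade as a change in the data. The sources and ratios here are hypothetical.

```python
# Hypothetical blend: the ratios would be set empirically, for example
# after comparing benchmark questions across sources.
blend_ratios = {"panel": 0.60, "river": 0.25, "social": 0.15}
target_n = 800  # completes needed this wave

allocation = {source: round(share * target_n)
              for source, share in blend_ratios.items()}
print(allocation)  # {'panel': 480, 'river': 200, 'social': 120}
```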
The conversation around conducting surveys with online panels rather than face-to-face or CATI is one that comes up every day for me in South East Asia. Whilst this is an entirely valid discussion, online actually adds ways to ensure respondents are trustworthy and valid. In this mini-series, we unlock the truths behind these myths.
Knowing who someone really is isn’t exclusive to our industry. In the everyday world, people can look us in the eye and lie; we filter this, both consciously and subconsciously, and take forward what we are confident is true. Now back to research. As a student, I had friends who would get calls from focus group recruiters offering them 5,000 Rs ($80 USD). The next question would be: ‘Do you like/consume “x” product?’ They would agree to anything to get into the focus group and take the money home, including saying they drank Malibu regularly (I’m not sure anyone can say that hand on heart!). Whilst these recruiters are not a fair reflection of everyone in the industry, online research uses pre-screeners that are far more subtle than this tactic (and far less heavily incentivised, granted), which encourage a level of honesty from the outset.
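For illustration only, here is one way such a subtle pre-screener is often structured: the category of interest is buried among plausible distractors, and a fictitious option flags respondents who will agree to anything. ‘Stellar Reef’ is a made-up brand included purely as an overclaiming check; the rest of the sketch is equally hypothetical.

```python
# Hypothetical screener: the real target ("Malibu") is hidden among
# distractors rather than asked about directly.
screener_options = [
    "Malibu",
    "Captain Morgan",
    "Stellar Reef",   # fictitious brand: selecting it flags overclaiming
    "Bacardi",
    "None of these",
]

def passes_screener(selected: set, target: str = "Malibu") -> bool:
    # Reject anyone claiming the fictitious brand; qualify only those
    # who select the target category.
    if "Stellar Reef" in selected:
        return False
    return target in selected

print(passes_screener({"Malibu", "Bacardi"}))       # True
print(passes_screener({"Malibu", "Stellar Reef"}))  # False
```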
According to a recent GRIT Sample Report* presented at SampleCon in January, 81% of industry buyers and sellers believe traditional panels are dying. At the same time, over half believe traditional panels are the gold standard. Traditional research panels do bring advantages that yield data quality; perhaps most importantly, they are not associated with loyalty programs. Rather, panelists are recruited solely for the purpose of taking surveys, which minimizes bias. Other important benefits include validation of people’s identities upon registration, as well as extensive profiling that can be used not only for targeting but also to shorten surveys by appending data. Double opt-in panels also facilitate the integration of behavioral and attitudinal data sets.
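As a small illustration of shortening surveys by appending data, the sketch below merges stored profile attributes into survey responses at the processing stage, so the questionnaire never has to re-ask them. The panelist IDs and profile fields are hypothetical.

```python
# Hypothetical double opt-in panel profiles captured at registration.
profiles = {
    "p001": {"age": 42, "gender": "female", "region": "South"},
    "p002": {"age": 29, "gender": "male", "region": "West"},
}

# Survey responses carry only the panelist ID plus the new questions.
responses = [
    {"panelist_id": "p001", "q1_brand_awareness": "yes"},
    {"panelist_id": "p002", "q1_brand_awareness": "no"},
]

# Append profile attributes after fieldwork instead of asking again.
enriched = [{**response, **profiles[response["panelist_id"]]}
            for response in responses]
for row in enriched:
    print(row)
```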