According to a report released by the Boston Consulting Group, millennials will outnumber baby boomers 78 million to 56 million by 2030, and they are starting to form brand and shopping preferences that will likely stick with them for a lifetime. Marketers have to evolve and be much more interactive to attract and retain millennial consumers. Gone are the days of commercials and print media ads; millennials drive and demand a two-way, reciprocal marketing approach. Brands of all sizes try to connect with millennials to understand what drives their attitudes and behaviors, but unfortunately millennial voices are often underrepresented within typical marketing research forums.
In Debunking Weighting Misperceptions, our first post in the weighting data mini-series, we reviewed the benefits of weighting and debunked misconceptions. Now, we review how to appropriately weight and evaluate the weighting scheme.
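The post itself walks through the details, but the core of a simple weighting scheme can be sketched in a few lines. The targets and sample counts below are entirely hypothetical, invented for illustration; the idea is just that each respondent's cell weight is the population share of that cell divided by its sample share:

```python
from collections import Counter

# Hypothetical census targets (population proportions) for gender x age cells.
targets = {("F", "18-34"): 0.15, ("F", "35+"): 0.36,
           ("M", "18-34"): 0.14, ("M", "35+"): 0.35}

# Hypothetical achieved sample: one cell key per respondent (n = 100).
sample = ([("F", "18-34")] * 30 + [("F", "35+")] * 25 +
          [("M", "18-34")] * 25 + [("M", "35+")] * 20)

counts = Counter(sample)
n = len(sample)

# Cell weight = population share / sample share.
weights = {cell: targets[cell] / (counts[cell] / n) for cell in targets}

# Respondent-level weights; by construction they average to 1.
w = [weights[cell] for cell in sample]
print(round(sum(w) / n, 3))
```

Here the over-sampled younger cells get weights below 1 and the under-sampled older cells get weights above 1, pulling the weighted sample back toward the target distribution. Evaluating the scheme then means checking how extreme those weights become.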
With the presidential election in the United States in full swing, there has been a lot of talk about the validity of political polls, including discussion of how to appropriately weight data. In this mini-series, we unlock the truths behind common weighting myths and misconceptions.
Sampling often seems to be an afterthought for clients, as many simply state they want a ‘nationally representative sample.’ The question is what the client means by a nationally representative sample. One client might think it means representation on age and gender only, while another might expect it to include controls on additional variables such as region, income, and education.
A dozen years ago, a debate raged in the marketing research community over the switch from probability sampling methods, such as telephone random-digit dialing (RDD), to the nonprobability sampling methods typical of online access panels. In the years since, most clients have moved to online samples, but some still cling to probability methods. However, the quality of probability samples is now being questioned because of low RDD response rates. In an interesting twist, the very techniques that nonprobability samples use to weight and model data must now often be applied to probability samples to account for nonresponse bias.
Research has consistently shown that not all panels are the same. Recruitment sources and management practices vary, and this can cause differences among panels. Beyond panels, there are other sources of online survey respondents, such as river, dynamic, and social media sources, and these can produce data that differ from one another as well as from panel data. Given the wide variety of sample sources, and their trade-offs in cost and quality, researchers often struggle with the question, “How can I blend in other sources without impacting my data?”
Many clients include quality checks in surveys to make sure respondents are engaged and answering honestly. However, many of these checks produce false positives, which often means valid, engaged respondents are thrown out of the sample. How can we reduce false positives?
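A back-of-the-envelope calculation shows why false positives add up. The numbers below are assumptions chosen for illustration: if each of several independent checks occasionally flags even an engaged respondent, removing anyone who fails a single check discards far more valid respondents than removing only those who fail two or more:

```python
from math import comb

# Assumed, illustrative rates: 8 attention checks, each independently
# flagging a genuinely engaged respondent 3% of the time.
n_checks, p_flag = 8, 0.03

def prob_at_least(k: int) -> float:
    """P(a valid respondent is flagged on at least k of the checks),
    via the binomial distribution."""
    return sum(comb(n_checks, i) * p_flag**i * (1 - p_flag)**(n_checks - i)
               for i in range(k, n_checks + 1))

print(f"removed if any single check fails: {prob_at_least(1):.1%}")
print(f"removed only if 2+ checks fail:    {prob_at_least(2):.1%}")
```

Under these assumed rates, a one-strike rule removes over a fifth of valid respondents, while a two-strike rule removes only a few percent, which is one reason many researchers score respondents across multiple checks rather than terminating on any single failure.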
Do you think like a respondent?
Poor-quality survey design leads to low completion rates, high dropout rates, speeding, suspicious behavior, panel attrition, and higher sample costs. Ultimately, poor design can lead to bad business decisions. Mobile may finally force better survey design and better-written questions.
The Marketing Research Shared Interest Group (SIG) of the Cincinnati American Marketing Association meets monthly to discuss industry issues, growing trends, techniques and methodologies. During the February meeting, Brian Lamar from EMI Research Services led a great discussion across multiple industry topics. One common thread across all key points: clients.
Everyone hates data transitions, but sometimes they are necessary. In most of the world, marketing research has undergone the transition to online from either telephone or face-to-face interviewing. When these transitions happen, we typically see data differences, some of which can be measured, calibrated, and explained, while in other cases we are less able to pin down the root cause.