In a recent blog post I presented a framework of research uses and techniques. Today’s post focuses on two of the applied research uses: strategic marketing management and marketing performance. Strategic marketing management involves getting the big picture by understanding opportunities, problems, and potential targets. Marketing performance focuses on assessing performance by monitoring and analyzing what is happening in-market.
Teaching marketing research has given me the opportunity to connect with people who could be future leaders in the marketing research industry, which I find to be an exciting extension of my ‘day job’ heading up research methods and best practices for Lightspeed. I am currently teaching Consumer Insights at Northern Kentucky University. Teaching undergraduates marketing research has made me reevaluate how we in the industry talk about various topics and try to come up with simple ways to explain what we do. One of my first challenges was creating a framework that summarizes the uses of marketing research and the specific research techniques tied to each use. I assumed this would be simple; however, I quickly realized I couldn’t find what I wanted, so I created my own framework.
According to a report released by the Boston Consulting Group, millennials will outnumber baby boomers 78 million to 56 million by 2030, and they are starting to form brand and shopping preferences that will likely stick with them for a lifetime. Marketers have to evolve and become much more interactive to attract and retain millennial consumers. Gone are the days of commercials and print media ads; millennials drive and demand a two-way, reciprocal marketing approach. Brands of all sizes try to connect with millennials to understand what drives their attitudes and behaviors, but unfortunately, millennial voices are often underrepresented in typical marketing research forums.
In Debunking Weighting Misperceptions, our first post in the weighting data mini-series, we reviewed the benefits of weighting and debunked misconceptions. Now, we review how to appropriately weight and evaluate the weighting scheme.
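One widely used way to evaluate a weighting scheme (not spelled out in the post itself) is Kish’s weighting efficiency, which translates the spread of the weights into an effective sample size. Here is a minimal sketch; the weight values are illustrative, not taken from any study.

```python
def weighting_efficiency(weights):
    """Kish weighting efficiency: (sum w)^2 / (n * sum w^2).

    1.0 means the weights cost no precision; lower values mean
    the weights inflate variance (smaller effective sample size).
    """
    n = len(weights)
    total = sum(weights)
    sum_sq = sum(w * w for w in weights)
    return total * total / (n * sum_sq)

# Illustrative weights for six respondents (hypothetical values)
weights = [0.8, 1.2, 1.0, 0.5, 1.5, 1.0]
eff = weighting_efficiency(weights)
effective_n = eff * len(weights)
print(round(eff, 3), round(effective_n, 2))
```

A scheme whose efficiency falls far below 1.0 is usually a sign that the weighting targets are too aggressive for the sample on hand.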
With the presidential election in the United States in full swing, there has been a lot of talk about the validity of political polls, including discussion of how to appropriately weight data. In this mini-series, we unlock the truths behind common weighting myths and misconceptions.
Sampling often seems to be an afterthought for clients, as many simply state they want a ‘nationally representative sample.’ The question is, what does the client mean by a nationally representative sample? One client might think it means representation on age and gender only, while another might expect controls on additional variables such as region, income, and education.
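The gap between those two definitions shows up directly in the weights. Below is a minimal sketch of cell (post-stratification) weighting on age by gender only; the sample counts and population targets are made up for illustration.

```python
# Hypothetical respondent counts per age-by-gender cell
sample_counts = {
    ("18-34", "F"): 150, ("18-34", "M"): 100,
    ("35-54", "F"): 120, ("35-54", "M"): 130,
    ("55+",   "F"): 250, ("55+",   "M"): 250,
}
# Hypothetical target proportions, e.g. from census figures
population_share = {
    ("18-34", "F"): 0.15, ("18-34", "M"): 0.15,
    ("35-54", "F"): 0.18, ("35-54", "M"): 0.17,
    ("55+",   "F"): 0.18, ("55+",   "M"): 0.17,
}

n = sum(sample_counts.values())
# Weight = target share / observed share for each cell
weights = {
    cell: population_share[cell] / (count / n)
    for cell, count in sample_counts.items()
}
# A weight above 1 boosts an underrepresented cell;
# below 1 shrinks an overrepresented one.
for cell, w in sorted(weights.items()):
    print(cell, round(w, 2))
```

Adding controls on region, income, or education multiplies the number of cells, which is why clients who mean different things by ‘nationally representative’ can end up with very different weighting schemes.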
A dozen years ago, a debate raged in the marketing research community over the switch from probability sampling methods, such as telephone RDD (random-digit dialing), to the nonprobability sampling methods typical of online access panels. In the intervening years most clients moved to online samples, but some still cling to probability methods. However, the quality of probability samples is now being questioned because of low RDD response rates. In an interesting twist, the very techniques that nonprobability samples use to weight and model data often now need to be applied to probability samples to account for nonresponse bias.
Research has consistently shown that not all panels are the same. Recruitment sources and management practices vary, and this can cause differences among panels. Beyond panels, there are other sources of online survey respondents, such as river, dynamic, and social media sources, and these can produce data that differ from one another as well as from panel data. Given the wide variety of sample sources, and their trade-offs in cost and quality, researchers often struggle with the question, “How can I blend in other sources without impacting my data?”
Many clients include quality checks in surveys to make sure respondents are engaged and answering honestly. However, many of these checks produce false positives, which often means valid, engaged respondents are thrown out of the sample. How can we reduce false positives?
Do you think like a respondent?
Poor survey design leads to low completion rates, high dropout rates, speeding, suspicious behavior, panel attrition, and higher sample costs. Ultimately, poor design can lead to bad business decisions. Mobile may finally force better survey design and better-written questions.