Many clients include quality checks in surveys to make sure respondents are engaged and answering honestly. However, many of these checks produce false positives, meaning valid, engaged respondents are often thrown out of the sample. How can we reduce false positives?
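One common way to reduce false positives is to require a respondent to fail several independent checks before removal, rather than discarding anyone who trips a single check. The sketch below illustrates the idea; the check names, thresholds, and data fields are illustrative assumptions, not a standard from any particular survey platform.

```python
# Hypothetical sketch: flag respondents only when they fail multiple
# independent quality checks. Field names and thresholds are assumptions.

def quality_flags(resp, median_seconds):
    """Return the list of quality checks this respondent failed."""
    flags = []
    # Speeder check: completed in under a third of the median time.
    if resp["seconds"] < median_seconds / 3:
        flags.append("speeder")
    # Straight-lining check: identical answers across a grid question.
    if len(set(resp["grid_answers"])) == 1:
        flags.append("straightliner")
    # Attention check: failed an explicit trap question.
    if not resp["passed_trap"]:
        flags.append("trap")
    return flags

def should_remove(resp, median_seconds, min_failures=2):
    # Requiring two or more failures cuts false positives: an engaged
    # respondent may trip one check by chance, but rarely several.
    return len(quality_flags(resp, median_seconds)) >= min_failures

# A fast but otherwise engaged respondent trips only the speeder check
# and is kept; a careless one trips all three and is removed.
fast_but_valid = {"seconds": 90, "grid_answers": [1, 4, 2, 5], "passed_trap": True}
careless = {"seconds": 80, "grid_answers": [3, 3, 3, 3], "passed_trap": False}
print(should_remove(fast_but_valid, median_seconds=300))  # False
print(should_remove(careless, median_seconds=300))        # True
```

The design choice is simply a composite score: any single check can misfire on a legitimately fast or opinionated respondent, but the probability of a valid respondent failing several unrelated checks at once is much lower.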
The Marketing Research Shared Interest Group (SIG) of the Cincinnati American Marketing Association meets monthly to discuss industry issues, growing trends, techniques and methodologies. During the February meeting, Brian Lamar from EMI Research Services led a wide-ranging discussion of industry topics. One common thread ran through every key point: clients.
Research clients should understand how the source of their respondents affects research data. At Lightspeed GMI, we have been observing significant differences in data quality depending on respondent source. The way a respondent enters a study affects the overall quality of the data obtained from that respondent. This is not a story of good sources and bad sources, but of meaningful differences; ignoring them can result in poor quality research.
As research has transitioned over the past 15 years from random digit dialing and door-to-door methods to online data collection, a shift from probability to non-probability sampling has occurred. To assuage researchers' concerns, rigorous sampling methods were developed to ensure representative samples. Despite this, sampling practices have changed rapidly as respondent efficiency became the focus of fieldwork companies and respondent routing replaced traditional sampling across much of the market research industry.