Catch More Bad Data with Advanced Approaches
For the past 20 years, the Internet has given researchers tools to gather marketing data quickly and efficiently. Traditionally, monitoring online data quality has fallen to professional researchers, who use a variety of tactics to spot and remove bad data.
Now, with the advent of easy-to-use survey software, some brands are bringing their research in-house. That means they must also shoulder the responsibility of identifying and removing bad data and dishonest responses, a responsibility they too often overlook. It grows only more urgent as dishonest respondents become increasingly adept at cheating and avoiding detection.
Current Tactics for Identifying Bad Data
Fraudulent data is a widespread issue, resulting from respondents speeding through a survey, not paying attention to the questions, or becoming fatigued. The problem this presents is obvious: Management cannot make reliable decisions with unreliable data.
Current tactics employed by marketing research firms and in-house researchers alike include:
- Analyzing the median/mean time to complete the survey
- Verifying open-ended responses for nonsense answers
- Adding non-sequitur instruction questions (e.g., “To continue, select the following answer…”)
…among others. While these do help weed out some bad responses, the rate at which fraudulent data is caught by any single trap question is small: only about 1% to 3%. Research indicates that about 15% of respondents answer carelessly, and that this number increases with survey length. Furthermore, as study specifications become more stringent, the proportion of bad responses also rises. Shockingly, inadequate methodologies for catching bad data can leave as much as 20% of responses completely random.
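The tactics listed above can be sketched in a few lines of code. The sketch below is illustrative only: the respondent records, the junk-answer word list, and the "half the median time" speeding threshold are assumptions for the example, not an established standard.

```python
import statistics

# Hypothetical respondent records: completion time in seconds, an open-ended
# answer, and the response to a trap item ("To continue, select 'Agree'").
respondents = [
    {"id": 1, "seconds": 410, "open_end": "I buy this brand for the taste.", "trap": "Agree"},
    {"id": 2, "seconds": 95,  "open_end": "none",                            "trap": "Agree"},
    {"id": 3, "seconds": 388, "open_end": "asdf",                            "trap": "Disagree"},
]

# Common nonsense open-ends; a real list would be longer and curated.
JUNK_OPEN_ENDS = {"none", "n/a", "asdf", "nothing", "good"}

def flag_respondent(r, median_seconds):
    """Return the list of quality checks this respondent fails."""
    flags = []
    if r["seconds"] < 0.5 * median_seconds:        # far faster than the median
        flags.append("speeding")
    if r["open_end"].strip().lower() in JUNK_OPEN_ENDS:
        flags.append("junk_open_end")
    if r["trap"] != "Agree":                       # missed the instruction item
        flags.append("failed_trap")
    return flags

median_time = statistics.median(r["seconds"] for r in respondents)
flagged = {r["id"]: flag_respondent(r, median_time) for r in respondents}
```

Here respondent 2 is flagged for speeding and a junk open-end, and respondent 3 for a junk open-end and a failed trap question, while respondent 1 passes all three checks.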
What’s Wrong With the Current Tactics?
Unfortunately, as online surveys have become more ubiquitous, respondents bent on cheating have learned the tricks of the trade. They may take care not to speed through questions, write “none” for every verbatim response, or quickly scan answers for special instructions. As a result, almost any survey is guaranteed to receive some share of false responses.
What about sample providers? Sample providers advertise their ability to filter out bad respondents and deliver the most trustworthy panel possible. Yet even with the advanced methodologies that survey panels claim to employ, researchers still end up with fraudulent responses. Relying on sample providers alone is simply not good enough.
Regrettably, traditional tactics take hours or even days to detect junk responses, so researchers need a solution with multiple layers that can dynamically flag bad data in real time.
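One way to picture a multi-layered, real-time approach is to score each interview the moment it is submitted, combining several checks into a single accept/reject decision instead of waiting for a post-field cleaning pass. The check names, weights, and threshold below are assumptions for illustration, not any vendor's actual methodology.

```python
def quality_score(checks_failed, weights=None):
    """Combine individual check failures into one score; higher is worse.

    Weights are illustrative: missing a trap question is treated as
    stronger evidence of fraud than a single junk open-end.
    """
    weights = weights or {"speeding": 2, "straightlining": 2,
                          "junk_open_end": 1, "failed_trap": 3}
    return sum(weights.get(check, 1) for check in checks_failed)

def screen(checks_failed, threshold=3):
    """Accept or reject a response as soon as it arrives."""
    return "reject" if quality_score(checks_failed) >= threshold else "accept"
```

Under this sketch, one junk open-end alone is accepted, but a failed trap question, or speeding combined with a junk open-end, is rejected immediately rather than hours or days later.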
So What’s the Solution?
Employing a multi-layered, real-time methodology, surveys can be fielded faster and yield more accurate data. Anything less risks skewed results, especially for teams relying on traditional methods, or no method at all.
As online survey platforms proliferate, so will respondents looking for an easy buck. It is the responsibility of those executing the research to learn the common signs of bad responses and to deploy the most effective methodologies against them in a timely manner. Researchers must adapt with the technology and become acquainted with the tools needed to catch, dynamically and quickly, data that could adversely impact a brand’s marketing strategy.
If you’d like to learn more about Brandware’s advanced quality checks, contact us here.
Johnson, Jeff. “Improving online panel data usage in sales research.” Journal of Personal Selling & Sales Management 36.1 (2016): 74-85.
Garlick & Knapton. “Catch me if you can.” Quirk’s, November 2007: 58.
Meade & Bartholomew. “Identifying careless responses in survey data.” Psychological Methods 17.3 (2012): 437-455.