Innovate Podcast Series #4 - Improving Quality through Survey Design

By Lisa Wilding-Brown

One of the fundamental challenges of good survey design is accounting for all of the different factors – data quality, fraud protection, mobile compatibility, and general user experience. In a recent episode of the Innovate Podcast, Mark Menig, CEO of True Sample, and I took a closer look at these factors.

In this article, I’m going to summarize some of the more salient points we discussed. You can also listen to the full podcast episode here:

[Podcast player embed: Episode 4]

A lot of people in the industry want all the responsibility for fraud detection and data quality to sit with one entity within the survey supply chain, right? Unfortunately, it’s not possible to do that, as it needs to be a collaborative effort involving all parties operating in the research ecosystem. The brand commissioning the research, the research firm, the sample provider, sample provider partners, the survey hosting and programming tool, and the survey itself – they all have a role.

This really varies from client to client, but there is a need for more education, more communication, and just general knowledge about what’s out there, and how the survey author can really help in addressing these issues.

Keeping a Survey Short and Simple

One of the most important things to do when authoring a survey is to keep it short and simple. Use clear and concise language, written to a fifth-grade reading level, to mitigate the risk that someone might not fully understand a question. Text-heavy surveys are overwhelming and put a greater burden on the respondent to understand and properly complete what you’ve presented. Keeping the survey short – less than 20 minutes (10 if you can) – and paring down language to be accessible is very important.

Implementing a Honey Pot in Your Survey

There’s something called a “honey pot” that can help with survey design as well. This is a question designed by the programmer to determine whether the respondent is a real human. It’s most often hidden, so humans won’t see it and cannot respond to it, while bots, which are unable to differentiate between hidden and visible questions, will fill it in and identify themselves.
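As a rough illustration of the idea, here is a minimal server-side sketch of honey pot screening. The field name `extra_comments` and the response format are hypothetical, not part of any specific survey platform; the assumption is that the survey renders this input hidden via CSS, so a human never fills it in.

```python
# Hypothetical honey pot field, rendered in the survey form but hidden
# from humans with CSS (e.g. display:none). A real respondent leaves it
# empty; a naive bot fills in every field it finds.
HONEYPOT_FIELD = "extra_comments"

def is_probable_bot(response: dict) -> bool:
    """Flag a submission whose hidden honey pot field was filled in."""
    return bool(response.get(HONEYPOT_FIELD, "").strip())

# A human never sees the hidden field; a bot completes everything.
human = {"q1": "Yes", "extra_comments": ""}
bot = {"q1": "Yes", "extra_comments": "Great survey!"}
```

In practice this check would run alongside other quality signals rather than on its own, since sophisticated bots can learn to skip hidden fields.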

Another useful strategy here to identify bots is to look at survey completion time. This isn’t 100% foolproof as technology has developed in recent years, but early on, bots would take a minute or two to complete a survey that humans would average 20+ minutes on. Look for these signals in your data to identify possible bot activity.
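The completion-time signal described above can be sketched as a simple screening pass. The 30%-of-median cutoff below is purely illustrative, not an industry standard; the point is that respondents finishing far faster than the group are worth reviewing.

```python
from statistics import median

def flag_speeders(durations_sec, min_fraction=0.3):
    """Return indices of respondents who finished far faster than the
    group's median completion time.

    min_fraction is an illustrative threshold: anyone below 30% of the
    median duration is flagged for manual review, not auto-rejected.
    """
    cutoff = min_fraction * median(durations_sec)
    return [i for i, d in enumerate(durations_sec) if d < cutoff]

# A ~20-minute survey: most completes take around 1200 seconds, but two
# "respondents" finish in about a minute - a classic bot signal.
times = [1150, 1300, 70, 1240, 65, 1180]
print(flag_speeders(times))  # -> [2, 4]
```

A review step matters here because, as noted, the signal isn’t foolproof: some legitimate respondents are simply fast, and newer bots deliberately slow down.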

Implementing a Red Herring Question

A red herring is an old-school strategy, but it can still work well today if implemented properly. These trap questions let you catch bots or humans who are bypassing the basic rules of the survey. There are several ways to implement them:

  • Instructional Red Herrings – These often guide respondents to choose an item that matches a certain word, such as the house on the beach or the book, usually corresponding to multiple images.
  • Skill-Based Questions – These are simple questions like 2+2, but simple versions tend not to work because of their ease; they need to be unique and strong enough to stop fraudsters and bots.
  • Honesty Based Questions – A great example of this is asking for what brands someone might be aware of, and then including several brands that are not real. Fraudsters tend to over-select, trying to do whatever you want them to do so they qualify.
  • Knowledge-Based Questions – When attempting to identify an expert, or when knowledge matters for a given topic, these can help to weed out not only people who are being dishonest, as in the last case, but also those who are not legitimate experts.

Good red herring questions can whittle down the field, identify potential fraudsters, and improve overall survey quality.
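The honesty-based variant above can be sketched in a few lines. The decoy brand names here are made up for illustration; the logic simply flags anyone who claims awareness of a brand that doesn’t exist.

```python
# Hypothetical decoy (fake) brands mixed into a brand-awareness list.
# Claiming awareness of any decoy suggests over-selection.
DECOY_BRANDS = {"Luminexa", "Vantrelle"}

def failed_honesty_check(selected_brands) -> bool:
    """True if the respondent claimed awareness of any fake brand."""
    return bool(DECOY_BRANDS & set(selected_brands))

# An over-selector ticks everything, including the decoys.
honest = ["Brand A", "Brand B"]
over_selector = ["Brand A", "Brand B", "Luminexa", "Vantrelle"]
```

One decoy hit might be a misread; flagging respondents who select multiple decoys is a common, more conservative variation of this check.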

Building Surveys for Mobile Use

An increasing number of brands are looking for insights from young audiences that spend the majority of their time on mobile devices. The growth of mobile means it’s more important than ever for surveys to adapt. The smaller format, the nature of mobile use, and the platform’s different limitations mean we need to take a different approach than with traditional online surveys.

Mobile can be effective in activating these younger audiences, but at the same time, these surveys need to be designed to work within the context of a mobile device. Just because you can get a grid with 60 different options to display on a mobile device doesn’t mean it’s an enjoyable experience for the respondent.

These are all issues that should be carefully considered and implemented for your survey design. To hear the full conversation between Mark and myself, you can download the podcast from the link below or stream it directly from your device using Apple Podcasts or Google Play.