Five best practices for ensuring quality user research data

How do you make sure your data analysis techniques are filtering out misleading or invalid responses?

This is part two of a three-part series on UX research data quality. Part one, ‘How do you ensure the best quality participants for user research?’, is available here, and part three, ‘Five ways to ensure your research data is watertight’, is here.

You’ve just finished running an online questionnaire, so you grab a coffee and sit down to analyze the sample data. But as you look through, something immediately jumps out. You notice one respondent has straight-lined and clicked ‘option C’ for every single answer, while another has answered a nonsensical ‘yes’ to your well-thought-out, open-ended questions.

Personal frustrations aside, these kinds of responses can dramatically reduce the validity of your user research. They skew the findings and harm the quality of your data which, in turn, can make the whole project feel like a waste of time – not to mention a waste of money. 

Online surveys, though, shouldn’t be dismissed as a user research method. After all, they can be a great way to discover your customers’ priorities and needs, as well as to explore new ideas and preferences.

You just need to find out how to separate the honest, authentic participants from the people who are clicking any old answer to get the reward at the end of the survey.

As part of this, you’ll also want to filter out bots, de-duplicate individuals, and make sure to catch any fraudsters who are pretending to be someone they’re not. 

How to stop bad data before it starts 

Putting together a quality user research project takes time, effort, and experience. Your first step in weeding out bad data is finding a partner who can manage the process with rigor and provide you with the assurance that the quality of participants is as high as it can be.

They should, at a minimum, ensure that bots are excluded and ‘professional’ testers are screened out.

Even after this, though, you’re still at risk from participants who’ll click ‘option C’ for almost every answer. However, we need to remember that even the most honest participants might make a genuine mistake.

No one makes a living from responding to surveys (and if they did, a whole new set of problems would arise). The people completing them may often be distracted. Whether the phone is ringing in the background or they’re multi-tasking while talking to their partner, it’s easy for anyone to respond in ways that should be discounted.

This is where things become tricky. If you’re trying to remove disengaged participants based on poor quality inputs, you may inadvertently also discount genuine participants whose errors were truly accidental – and whose other answers are valuable. 

Design is crucial  

If a survey is too long, laborious or confusing, then it’s more likely that your participants will disengage and click answers randomly. This is terrible news for your data quality. So, one of the most important things you can do is design your survey with the user experience in mind. 

Good survey principles include: 

  • Keep the survey short
  • Make questions easy to understand
  • Reduce the use of grids and scales 
  • Limit the number of answer options 

Five best practices for quality user research data

Once you’ve got your survey design in tip-top condition, it’s time to roll it out and analyze the results.

As we’ve mentioned, there’s a high possibility you’re going to get a few unusable responses in your sample. What matters is how good you are at finding them.

In our experience, there are a few best practices for cleaning up your user research data. 

1. Include two red herrings

Red herrings can be excellent quality control measures. They involve placing bizarre answers within your survey to help weed out those who aren’t really paying attention.

For example, if you’re a mobile phone manufacturer, you could ask participants which phone color they are most likely to buy. One of the answers you include could be ‘window’. If anyone clicks this answer, it’s likely they are just racing through the survey to get to the prize at the end.

Another form of red herring is including the same question twice with slightly different phrasing. If the respondent’s answers vary widely between these questions, they may not be engaged. That said, as we’ve already mentioned, people are prone to distraction. One genuine mistake or slip of the mouse could mean a well-meaning participant falls for a red herring. This is why we recommend you include two. If a participant fails both red herring checks, then it’s likely they haven’t been paying attention and their answers can be discarded.
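If your survey tool lets you export results as a spreadsheet, this check is straightforward to automate. Below is a minimal sketch in Python using pandas; the column names and the ‘window’ trap answer are hypothetical placeholders, so adapt them to match your own export.

```python
import pandas as pd

# Hypothetical export: one row per respondent, one column per answer.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "trap_colour":   ["black", "window", "window"],              # red herring #1: 'window' is not a real phone colour
    "q5_first":      ["agree", "disagree", "agree"],             # red herring #2: the same question asked twice
    "q5_repeat":     ["agree", "disagree", "strongly disagree"], # with slightly different phrasing
})

# Flag each red-herring failure separately.
failed_trap_answer = responses["trap_colour"] == "window"
failed_repeat_check = responses["q5_first"] != responses["q5_repeat"]

# Only discard respondents who fail BOTH checks; a single slip could be an honest mistake.
responses["discard"] = failed_trap_answer & failed_repeat_check
print(responses[["respondent_id", "discard"]])
```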

2. Review answers to your open-ended questions

A quick and easy way to find disengaged participants is to include one open-ended question in your survey. Generally speaking, we don’t recommend including more than two of these questions. This is because they take more time and thought to answer, so too many can easily lead to a participant losing interest.

So, one or two of these is the magic number. When you look through the responses, poor-quality answers are easy to spot. For example, if someone has typed “sldgfhalksghdf” or given a single-word answer like “no”, then they can be removed from your sample.
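If you’d rather not scan every open-ended answer by eye, a rough first pass can be automated. The sketch below assumes a pandas export with a hypothetical open_feedback column; it flags one-word answers and keyboard-mash strings for manual review rather than deleting anything automatically.

```python
import pandas as pd

# Hypothetical column holding open-ended answers from the survey export.
answers = pd.Series([
    "The checkout flow felt confusing on mobile",
    "no",
    "sldgfhalksghdf",
], name="open_feedback")

def looks_low_effort(text: str) -> bool:
    """Rough heuristic: flag one-word answers and keyboard-mash strings."""
    words = text.strip().split()
    if len(words) <= 1:
        return True
    # Keyboard mashing tends to contain very few vowels.
    vowel_ratio = sum(ch in "aeiou" for ch in text.lower()) / max(len(text), 1)
    return vowel_ratio < 0.2

flagged = answers[answers.apply(looks_low_effort)]
print(flagged)  # review these by hand rather than removing them outright
```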

3. Watch out for consistent errors and nonsense selections 

A high number of errors, such as contradictory answers or invalid entries, could indicate that the respondent is rushing and disengaged. It’s a good idea to set a limit on the number of errors you’ll allow a respondent to make before you exclude them.

As well as this, keep an eye out for nonsense selections, such as someone straight-lining or picking unrealistic answers.

As an example, put yourself in the shoes of a magazine publisher doing background research. If a respondent selected that they subscribe to 100+ magazines per month, then there’s a strong chance they aren’t being genuine.
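Both patterns are easy to flag programmatically if you have a spreadsheet export to work with. The sketch below assumes a hypothetical set of five rating-scale columns plus one numeric background question; the column names and the plausibility ceiling are illustrative, not prescriptive.

```python
import pandas as pd

# Hypothetical export: five rating-scale questions plus a numeric background question.
responses = pd.DataFrame({
    "respondent_id":   [1, 2, 3],
    "rating_q1":       [4, 3, 3],
    "rating_q2":       [4, 5, 3],
    "rating_q3":       [4, 2, 3],
    "rating_q4":       [4, 4, 3],
    "rating_q5":       [4, 1, 3],
    "magazines_month": [2, 4, 150],  # "How many magazines do you subscribe to per month?"
})

rating_cols = ["rating_q1", "rating_q2", "rating_q3", "rating_q4", "rating_q5"]

# Straight-lining: the same answer for every question in the grid.
straight_lined = responses[rating_cols].nunique(axis=1) == 1

# Unrealistic answers: pick a plausibility ceiling that suits your own question.
implausible = responses["magazines_month"] > 100

responses["flagged"] = straight_lined | implausible
print(responses[["respondent_id", "flagged"]])
```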

4. Monitor completion time 

An extremely low completion time could indicate that a participant has sped through the survey without much thought. Looking at this figure can, therefore, be a good indicator of whether someone is genuine or not.

However, some people are simply quicker than others. So, unless the time is unrealistically low, we’d recommend delving deeper into the answers of quick participants. If their answers also seem nonsensical, then they can be discounted.
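As a rough illustration, you could flag anyone who finished in a small fraction of the typical completion time. The threshold below is an assumption made for the sake of example, not an industry standard, and the completion_seconds column is hypothetical.

```python
import pandas as pd

# Hypothetical export with a completion time (in seconds) for each respondent.
responses = pd.DataFrame({
    "respondent_id":      [1, 2, 3, 4, 5],
    "completion_seconds": [310, 280, 45, 350, 295],
})

# Example rule of thumb: treat anything under a third of the median
# completion time as suspiciously fast.
median_time = responses["completion_seconds"].median()
responses["too_fast"] = responses["completion_seconds"] < median_time / 3

# Don't discard these rows outright; inspect their other answers first.
print(responses[responses["too_fast"]])
```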

5. Look at the whole picture

As you analyze your answers, it’s important to look at the whole picture. Just because a respondent fell for one red herring or answered quickly doesn’t necessarily mean that their answers are low quality. Ask yourself: “does the answer make sense in the scenario?”

As an example, let’s say someone answered “unsure” to an open-ended question. This doesn’t necessarily mean that they should be removed. It’s all about the context; look at their other answers. Do they seem reasonable? Is it possible the respondent genuinely wasn’t sure how to answer the open-ended question? If so, it’s likely they were answering authentically. 
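One simple way to act on this is to combine the flags from the earlier checks and only exclude respondents who trip more than one of them. The sketch below assumes hypothetical flag columns produced by those checks; the one-flag and two-flag thresholds are a judgement call, not a hard rule.

```python
import pandas as pd

# Hypothetical flags produced by the earlier checks, one row per respondent.
flags = pd.DataFrame({
    "respondent_id":      [1, 2, 3, 4],
    "failed_red_herring": [True,  False, True,  False],
    "gibberish_answer":   [False, False, True,  False],
    "straight_lined":     [False, True,  True,  False],
    "too_fast":           [False, False, True,  False],
})

flag_cols = ["failed_red_herring", "gibberish_answer", "straight_lined", "too_fast"]
flags["flag_count"] = flags[flag_cols].sum(axis=1)

# A single flag earns a manual review; two or more is grounds for exclusion.
to_review  = flags[flags["flag_count"] == 1]
to_exclude = flags[flags["flag_count"] >= 2]

print(to_review[["respondent_id", "flag_count"]])
print(to_exclude[["respondent_id", "flag_count"]])
```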

So there you have it

Want to boost your data quality even further? UserZoom’s UX research solution ensures you can reach a high-quality audience in a fast, scalable way. Get in touch to find out more.

But we’re not done just yet. Come back soon to read the third and final part of our data quality articles.  
