How to ensure quality unmoderated test data

Ensure the speed and efficiency of unmoderated user research is met with quality.

Unmoderated research is a powerful tool. You get speed, efficiency, and quick insights that can accelerate your team and move your product forward.

However, one of the challenges of unmoderated research is that because participants complete studies remotely and unsupervised, you give up some of your own quality control.

Therefore, there are a few steps you need to take when conducting online user research to ensure your speed and efficiency are met with quality.

Ensuring data quality in unmoderated tests

First, you need to work with a partner that manages the panel of testers with rigor. They’ll ensure bots aren’t included, that ‘professional’ testers are screened out, and they’ll continuously review the panel’s quality and engagement with different studies.

Once you have that partnership, ensure you make data cleaning part of the process.

Even when you work with a panel partner, some respondents still won't provide quality responses. That is par for the course in online user research, and market research has been managing this balance for years.

In fact, the industry norm is to expect that 10-15% of participants from a study will need to be removed. Jeff Sauro, founder of MeasuringU and the SUPR-Q user experience measurement, says researchers should expect to remove around 10% of respondents. “In our experience, around 10% of responses (often ranging from 3% to 20%) ‘cheat’ in surveys and unmoderated UX studies, and need to be tossed out or ‘cleaned’.”
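Sauro's ~10% figure has a practical planning consequence: to end a study with your target number of usable completes, you need to recruit more participants up front. A minimal sketch of that arithmetic (the function name and default rate are my own, not from any panel tool):

```python
import math

def recruits_needed(target_completes: int, removal_rate: float = 0.10) -> int:
    """Estimate how many participants to recruit so that, after removing
    low-quality responses at the expected rate, you still end up with
    the target number of usable completes."""
    return math.ceil(target_completes / (1 - removal_rate))

# At a 10% expected removal rate, a study needing 100 usable completes
# should plan to recruit about 112 participants in total.
print(recruits_needed(100))        # -> 112
print(recruits_needed(10, 0.10))   # -> 12
```

At the pessimistic end of Sauro's range (20%), the same 100-complete study would need 125 recruits, which is worth knowing when budgeting incentives.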


So it’s important to make data cleaning part of your process for any unmoderated research you conduct. If you’re running a think-out-loud study with 10 participants, you’ll likely need to remove one person because they didn’t speak aloud or their audio quality was poor. If you run a survey with 100 participants, you’ll likely need to remove about 10 for nonsensical open-ended commentary.

In the market research industry, it is actually a best practice to include an open-end question in surveys, to be able to easily filter out low-quality respondents. Again, it’s all part of the process.
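To make that filtering concrete, here is a minimal sketch of a first-pass screen on open-end answers. The heuristics (minimum word count, keyboard-mashing detection) are my own illustrative assumptions, not an industry standard, and it only flags answers for manual review rather than auto-removing anyone:

```python
def flag_for_review(answer: str, min_words: int = 3) -> bool:
    """Return True if an open-end answer looks low quality and deserves
    a manual look. Heuristics only -- never remove a respondent without
    reading their answers in context."""
    text = answer.strip().lower()
    if len(text.split()) < min_words:
        return True   # one-word or empty answers
    if len(set(text)) <= 3:
        return True   # keyboard mashing like "aaaa" or "asasas"
    return False

responses = [
    "asdf",
    "I liked the checkout flow but the coupon field was hidden",
    "good",
]
flagged = [r for r in responses if flag_for_review(r)]
```

Note that an honest “I don’t know” passes this screen, which matters for the point about bias made below: short does not always mean low quality.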

So when executing your research plan, follow the steps below to ensure your final results are actionable and based on quality data:

  1. Design your study script
  2. Program the study
  3. Test through the study yourself and make updates
  4. Test the study with one to two colleagues and make updates
  5. Soft launch the study – Take the study live with a few participants (3-5) to see whether in-the-wild participants have trouble completing the tasks because of confusing instructions, etc. Make any updates and, if need be, remove these initial participants from your data.
  6. Full launch the study – Send it out to the number of participants you need for this study.
  7. Clean the data – You can do this at different intervals throughout the test, depending on your preference. You can clean the data when you have 25% of the completes you need, 50%, or once you have 100% of the completes. Identify any respondents that need to be removed, and exclude them from the study. Check out our 101 guide to data cleaning for more information.
  8. Replace those participants – Take the study live again and get new participants in. Continue to repeat steps 7-8 until you have 100% of the participants you need.
  9. Analyze the data!
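Steps 7-8 form a loop: clean, replace, and repeat until you have all the completes you need. A minimal sketch of that loop, assuming a hypothetical `launch` function that recruits participants and returns their completed responses (no real panel API is implied):

```python
def collect_clean_completes(target, launch, is_low_quality):
    """Repeat steps 7-8: clean the data, then relaunch to replace
    removed participants, until `target` clean completes are collected.

    launch(n)           -- hypothetical: recruit n participants, return responses
    is_low_quality(r)   -- your cleaning criteria for a single response
    """
    clean = []
    while len(clean) < target:
        batch = launch(target - len(clean))          # recruit replacements
        clean.extend(r for r in batch if not is_low_quality(r))
    return clean
```

The key design point is that you only ever relaunch for the shortfall, so you aren't paying for more completes than you need.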

When deciding whether to remove a respondent, ask yourself, “Is their answer really low quality, or does it make sense in the scenario?” For example, “I don’t know” may be a perfectly reasonable, sufficient answer to some questions. You may not like the response, but removing it could bias the data. Also try to look at their answers holistically, across different questions.

In a study I ran recently, one participant wrote “The instructions are not clear” in one open-end response, provided a long, thoughtful answer in another, and said “I don’t know” in a third. Looking at their responses together, I decided to keep this participant in, as they were likely answering honestly for each question.

In another study I reviewed, a click task had the follow-up question, “Why did you click in that area?” I had to ask myself how I would answer that question. Probably “Because it makes sense” or “It’s similar to other sites I’ve seen before.” So part of getting thoughtful open-end responses is writing good, specific questions. Encourage participants to provide detail and type one to two sentences.

So when you run your next unmoderated project, be sure to embed data cleaning into your process. This step will not only ensure you have high-quality data, but will also save you time in the long run and ensure you’re making decisions with confidence.