Speeders, Cheaters, Bots and Repeaters pose a huge problem to UX and CX research. Responses from these participants can wreck your data by artificially influencing success rates, task times, and overall ratings.

Although the industry norm is to expect around 10-15% of these participants in your study, researchers are seeing a greater proportion of low-quality ‘cheating’ participants or bots hitting their studies due to the current global environment and increased demand for remote unmoderated user research.

Not only do they sully your data, but they also increase the amount of time spent on data cleaning and re-fielding.

What can we do about Speeders, Cheaters, Bots and Repeaters?

The two best things you can do to ‘beat the cheat’ are:

  1. Design your study with these responders in mind
  2. Clean your data with these responders in mind

In this post, we will go over these different types of problem participants. We’ll discuss why they’re a problem and what to do about them, and we’ll offer some tips for removing their bad data from your data set.

Let’s now take a look at Speeders, Cheaters, Bots and Repeaters…

Speeders

Why are they a problem?

Speeders are participants who speed through your study making little to no effort when completing tasks or answering survey questions.

How does this impact your study?

These participants often don’t follow task instructions, provide nonsense responses, and give straight-line ratings (e.g., 5s on all rating scales). Data from these participants can artificially influence success rates and task times, and inflate your ratings.

What can you do about it?

Design your study with Speeders in mind. Here are some tips for settings and question types that help identify Speeders…

Speeder settings: Set up automatic exclusion criteria for Speeders (either by setting a required number of clicks or a required amount of time spent completing a task). Participants who don’t meet the requirement are flagged and removed from your dataset. In the example below, you can see my study is set to exclude anyone who did not spend at least 30 seconds in 3 out of 4 tasks or did not make at least 1 click in 3 out of 4 tasks.

Keep in mind, you must select the thresholds based on how you think participants will behave during your task. Be sure to review those who have been eliminated based on your settings to ensure your thresholds are not overly stringent.
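
If your platform doesn’t cover a case, you can also apply the same logic yourself after export. Below is a minimal sketch in Python (pandas); the DataFrame, column names, and the 30-second/1-click thresholds are hypothetical stand-ins mirroring the settings described above.

```python
import pandas as pd

# Hypothetical export: one row per participant, with per-task
# time-on-task (seconds) and click counts as columns.
df = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3"],
    "task1_time": [45, 8, 60], "task2_time": [50, 6, 3],
    "task3_time": [38, 5, 55], "task4_time": [62, 7, 70],
    "task1_clicks": [4, 0, 5], "task2_clicks": [6, 1, 0],
    "task3_clicks": [3, 0, 7], "task4_clicks": [5, 0, 6],
})

time_cols = [f"task{i}_time" for i in range(1, 5)]
click_cols = [f"task{i}_clicks" for i in range(1, 5)]

# Flag anyone who did NOT spend at least 30 seconds on 3 of 4 tasks,
# or did NOT make at least 1 click on 3 of 4 tasks.
enough_time = (df[time_cols] >= 30).sum(axis=1) >= 3
enough_clicks = (df[click_cols] >= 1).sum(axis=1) >= 3
df["speeder_flag"] = ~(enough_time & enough_clicks)

print(df[["participant_id", "speeder_flag"]])  # only p2 is flagged
```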

Display questions one at a time: Presenting your questions all on a single page can enable Speeders to zip through your test without paying proper attention to your questions. Displaying one question at a time will force the Speeder to slow down. This is especially the case when you have a conditional follow-up question. If a Speeder sees that a rating of 3 will prompt a follow-up, they may be inclined to increase the rating so as to avoid it.

  • Best Practice: If a test has more than 5 questions, do not display all of the questions on the same screen.

Attention questions: These ensure that participants are reading and attending to the prompt in the question. Sometimes this could be in the form of a trick question or a question that requires an ‘action.’ In the example below, we used a block of text with instructions on how to answer the question – “For this question, please select 3 as the answer.”

As you can see, 22 participants clearly did not read the full instructions and chose an incorrect answer. These 22 participants would then be excluded.
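
Failed attention checks are also easy to flag in the raw export. Here’s a minimal sketch in Python (pandas); the sample data and the ‘attention_q’ column name are hypothetical.

```python
import pandas as pd

# Hypothetical export with the attention-check item in a column
# named 'attention_q' (the instruction was "please select 3").
df = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3", "p4"],
    "attention_q": [3, 5, 3, 1],
})

# Anyone who didn't follow the instruction gets flagged for exclusion.
df["failed_attention"] = df["attention_q"] != 3
print(df.loc[df["failed_attention"], "participant_id"].tolist())  # ['p2', 'p4']
```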

Reverse order questions: Ask the same question twice but in reverse to check for consistency. For example: “I enjoyed using this website” and “I did not enjoy using this website.” If a respondent agrees (or disagrees) with both questions, they are probably not taking the study seriously.
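
To turn this into an automatic flag, reverse-score the negatively worded item and compare it to its pair. The sketch below (Python/pandas) assumes 1-5 agreement scales, hypothetical column names, and an arbitrary 1-point tolerance you’d tune to your own scale.

```python
import pandas as pd

# Hypothetical 1-5 agreement ratings for the paired items.
df = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3"],
    "enjoyed": [5, 4, 5],       # "I enjoyed using this website"
    "not_enjoyed": [1, 2, 5],   # "I did not enjoy using this website"
})

# Reverse-score the negatively worded item on a 1-5 scale (5 -> 1,
# 4 -> 2, ...), then flag anyone whose paired answers disagree by
# more than 1 point.
df["not_enjoyed_reversed"] = 6 - df["not_enjoyed"]
df["inconsistent"] = (df["enjoyed"] - df["not_enjoyed_reversed"]).abs() > 1
print(df[["participant_id", "inconsistent"]])  # p3 agreed with both: flagged
```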

Buy-in questions (social norming): Buy-in questions emphasize the importance of the study and of being a good participant. Often, they also include a statement about the possible risks of speeding through the study and not reading each question carefully.

While this won’t stop or exclude Speeders from your data set, this can help persuade some to give your study the proper attention and effort.

  • Example buy-in question: “We check responses carefully in order to make sure that people have read the instructions for the task and responded carefully. We try to only use data from participants who clearly demonstrate that they have read & understood the study.  Throughout the study, we may ask questions that test whether you are reading the instructions.  If you get these wrong, we may not be able to use your data.  Do you understand?”
    • Yes
    • No 

How to identify and remove Speeders from your data set

Speeders zoom through your study with little to no effort. When data cleaning, you will want to keep an eye out for Speeders in the following ways:

  • Look for short task times that are unrealistic
  • Look for video recordings that show bizarre, illogical navigation
  • Look for straight-line ratings (especially if you included reverse order ratings); a sketch covering this and the task-time check follows this list
  • For more data cleaning tips see: 101 Guide to Data Cleaning
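
Here is a minimal sketch of the task-time and straight-lining checks in Python (pandas). The sample data, column names, and the 10-second floor are all hypothetical; pick thresholds that make sense for your own tasks.

```python
import pandas as pd

# Hypothetical export: task times in seconds plus several 1-5 ratings.
df = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3"],
    "task1_time": [48, 6, 52], "task2_time": [61, 7, 58],
    "rating1": [4, 5, 2], "rating2": [3, 5, 4], "rating3": [5, 5, 3],
})

time_cols = ["task1_time", "task2_time"]
rating_cols = ["rating1", "rating2", "rating3"]

# Unrealistically short task times: anything under an (arbitrary)
# 10-second floor on any task.
df["too_fast"] = (df[time_cols] < 10).any(axis=1)

# Straight-lining: the same answer on every rating scale, i.e. only
# one unique value across the rating columns.
df["straight_liner"] = df[rating_cols].nunique(axis=1) == 1

print(df[["participant_id", "too_fast", "straight_liner"]])  # p2 trips both
```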

Cheaters

Why are they a problem?

They’re not your true user. They have figured out a way to answer your screener questions and artificially qualify for your study.

How does this impact your study?

If your study is intended for a specific audience, and participants have found a way to cheat through your screener, the feedback you receive will be meaningless. This often becomes apparent when reviewing open-ended responses: you’ll see responses with little or no text, or meaningless feedback.

Side note: If you can get away with testing the general population, don’t limit your study to a needle-in-a-haystack audience unless your research calls for it.

What can you do about it?

Design your study with Cheaters in mind. Here are some tips for settings and question types that help identify Cheaters…

Limit the number of responses per device: By default, your study may be set to allow multiple responses from the same device. Turning this option off will allow only one response per device. In UserZoom you can find this option under: task builder → settings → security & controls

Randomize response options: Consider randomizing your response options, particularly for screener questions with termination logic or for validation questions. This will make it a little more challenging for a Cheater to guess the ‘correct’ answers (e.g., ‘all Bs’) to get through the screener or correctly answer a validation question.

Open-ends: Open-ended questions are great for catching participants who aren’t putting their full attention or effort into a study, or who simply don’t have the experience to give real answers. Ensuring that open-ended responses are relevant to your user group and substantive is a great way to catch Cheaters, Speeders, and Bots.

Knowledge questions: Similar to the above approach, this takes it one step further by testing whether participants truly are your users: can they correctly answer a knowledge question your intended audience is expected to know? For example, in one study, in addition to asking for participants’ job roles (looking for coders and developers), we asked a technical programming question.

Consistency check: Ask the same question twice, spaced far apart. For example, ask participants to type their age, and in another question ask them to select their age bracket. The first answer should fall within the selected age bracket.
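
This cross-check is easy to automate during cleaning. A minimal sketch in Python (pandas) follows; the column names, sample data, and simple ‘low-high’ bracket labels are hypothetical, and open-ended brackets like ‘65+’ would need extra parsing.

```python
import pandas as pd

# Hypothetical export: typed age plus a separately asked age bracket.
df = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3"],
    "age": [27, 41, 35],
    "age_bracket": ["25-34", "35-44", "18-24"],
})

def age_in_bracket(row):
    # Assumes simple "low-high" labels; real exports may need more parsing.
    low, high = map(int, row["age_bracket"].split("-"))
    return low <= row["age"] <= high

df["age_consistent"] = df.apply(age_in_bracket, axis=1)
print(df[["participant_id", "age_consistent"]])  # p3 is inconsistent
```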

Multiple choice: Cheaters will often choose all options when asked which of the following apply to them, to increase their chances of qualifying for a study. Try rephrasing your question.

For example, if you want to ask “Which of these websites have you visited in the last month?” you can change it to: “Which of these websites have you NOT visited in the last month?” Doing so will make it harder for them to guess how they should answer the qualifier question.

How to identify and remove Cheaters from your data set

Cheaters have figured out how to answer your screener questions and qualify for your study; however, they’ll speed through it with little to no effort. When data cleaning, you will want to keep an eye out for Cheaters in the following ways:

  • Look for straight-line responses (e.g., 5s on all rating scales)
  • Look for nonsense or meaningless responses to open-ended questions
  • Look for video recordings that show learned behavior, especially during difficult tasks
  • For more data cleaning tips see: 101 Guide to Data Cleaning

Bots and Repeaters

Why are they a problem?

If you’re receiving very similar data or responses, it’s likely a bot or someone taking the study many times with the goal of receiving the greatest incentive reward possible. Fortunately, many platforms have advanced settings where you can allow only one response per IP address.

It’s recommended to review the raw data output for patterns across multiple participants, such as the same answer choices across questions, identical open-text responses, nonsense selections, or the same rating on every question.

How does this impact your study?

This will invariably skew your data and true insights will be lost in the clutter.

What can you do about it?

The good news is that there are many ways to catch a bot or repeater when designing your remote study.

Limit the number of responses per device: By default, your study may be set to allow multiple responses from the same device. Turning this option off will allow only one response per device. You can find this option in UserZoom under: task builder → settings → security & controls

Open-ends: Open-ended questions are great for identifying repetitive, identical answers that a bot has been programmed to use or that a Repeater has copied and pasted. Real humans are unique, and it’s nearly impossible for two people to give the exact same text response to an open-ended question. If you see identical responses, chances are they’re from the same person or from a bot.
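
Catching these copy-paste duplicates is straightforward once you have the raw export. A minimal sketch in Python (pandas), with hypothetical column names and sample responses:

```python
import pandas as pd

# Hypothetical export with one open-ended response per participant.
df = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3", "p4"],
    "open_end": [
        "The checkout flow was confusing at the coupon step.",
        "good",
        "The checkout flow was confusing at the coupon step.",
        "Navigation felt slow on the search results page.",
    ],
})

# Normalize lightly (case, whitespace) so trivial edits don't hide
# copies, then flag every row whose text appears more than once.
normalized = df["open_end"].str.strip().str.lower()
df["duplicate_text"] = normalized.duplicated(keep=False)
print(df[["participant_id", "duplicate_text"]])  # p1 and p3 flagged
```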

Randomization: Randomize the order of your qualifying questions and the response options. This will make it harder to program a bot or for repeaters to learn the conditions and logic to get through the study again.

Multiple choice: Bots and Repeaters will often choose all options when asked which of the following apply to them, to increase their chances of qualifying for a study. Try rephrasing your question.

For example, if you want to ask “Which of these websites have you visited in the last month?”, you can change it to “Which of these websites have you NOT visited in the last month?” Doing so will make it harder for them to guess how they should answer the qualifier question.

How to identify and remove Bots or Repeaters from your data set

When data cleaning, you will want to keep an eye out for Bots and Repeaters in the following ways:

  • Look for straight-line responses (e.g., 5s on all rating scales)
  • Look for patterns in the dataset such as identical open-text responses, nonsense selections, or the same ratings across questions by multiple “participants” (a sketch for this check follows this list)
  • For more data cleaning tips see: 101 Guide to Data Cleaning
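
For the pattern check, comparing each participant’s full answer vector against everyone else’s catches exact repeats. A minimal sketch in Python (pandas), with hypothetical columns and sample data:

```python
import pandas as pd

# Hypothetical export: every substantive answer column per participant.
df = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3", "p4"],
    "q1": [2, 5, 2, 4], "q2": [3, 5, 3, 2],
    "q3": [4, 5, 4, 5], "q4": [1, 5, 1, 3],
})

answer_cols = ["q1", "q2", "q3", "q4"]

# Identical full answer patterns across "different" participants
# suggest a bot or a repeater; flag every matching row.
df["repeated_pattern"] = df.duplicated(subset=answer_cols, keep=False)
print(df[["participant_id", "repeated_pattern"]])  # p1 and p3 flagged
```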

In conclusion…

Congratulations! You’re now ready to ‘Beat the Cheat’ with your study design.

As demand for remote research continues to increase, the ability to run efficient, remote unmoderated research with high quality data and low turnaround times will be critical to serving our teams and understanding our end-users.

Regardless of whether you are a UX/CX professional, a marketer, or a product manager, remember to keep these strategies in mind to combat data quality issues when designing your next remote unmoderated study.


Learn when moderated or unmoderated testing is the right choice for you!

We’ve compiled a variety of real-life examples from leading UX practitioners across the globe, all of whom offer their own use-cases on when best to use moderated and unmoderated research.

The mini-ebook also includes

  • A concise introduction to both moderated and unmoderated testing
  • Key issues to consider when undertaking usability testing
  • A very handy cheat-sheet to the pros and cons of both methods

Download below, it’s completely free and you don’t even have to fill in a boring form to get it…

Everything in moderation


Main image by Franck V