Screening the ‘right’ participants is the most important part of conducting effective user research.
The right participants can provide the kind of valid feedback that drives meaningful improvements to your design.
So how can you effectively screen your test participants so you’re only getting the most valuable data?
A screener question is a tool for weeding out participants who are not qualified for your project. You should screen participants whenever you're looking to gain valuable user insights, across all types of qualitative and quantitative research: moderated and unmoderated studies, contextual projects, and more.
A screener consists of a series of questions, usually no more than five. It is important to ask the right questions in the correct sequence so you qualify the correct participants and produce good-quality insights: the research is only as good as the users you can get to participate.
Imagine you work at a pet food company that makes dog food. To get feedback on a new product designed for small and medium-sized dogs, you would only want to survey small- and medium-dog owners. Screening out owners of large and extra-large dogs gives you exactly the insights you are looking for and makes your research far more credible.
Here are the general types of questions you may want to ask as part of a screener:
Also remember this rule of thumb for writing a successful screener: disqualify unsuitable participants as early in the survey as possible, rather than making them answer irrelevant questions only to screen them out at a later stage.
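The rule of thumb above can be sketched in code. This is a minimal, hypothetical example (the question keys and qualifying rules are invented for the dog food scenario, not from any real screener tool): each answer is checked as soon as it is given, so an unqualified participant exits immediately instead of sitting through the rest of the screener.

```python
def screen(answers, questions):
    """Return (qualified, questions_asked) for one participant."""
    for asked, (key, qualifies) in enumerate(questions, start=1):
        if not qualifies(answers[key]):
            return False, asked  # screened out as early as possible
    return True, len(questions)

# Order the questions so the most-disqualifying one comes first.
# These questions and rules are illustrative only.
questions = [
    ("owns_dog", lambda a: a is True),
    ("dog_size", lambda a: a in {"small", "medium"}),
    ("buys_online", lambda a: a is True),
]

# A large-dog owner is screened out after question 2, not question 3.
qualified, asked = screen(
    {"owns_dog": True, "dog_size": "large", "buys_online": True}, questions
)
```

The design point is simply the ordering: putting the question most likely to disqualify someone near the front keeps the number of wasted answers to a minimum.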
For an in-depth and entertaining guide to successfully recruiting the ‘right’ participants for user research, download our comprehensive new ebook:
Do not ask more than five questions, or participants will tend to disengage and drop out. Asking too many questions, particularly when targeting niche profiles, can lead to 'panellist fatigue', an industry term for the frustration a participant experiences whilst taking part in a project.
If the screener is expected to be complex, or longer than usual, it is advisable to offer a 'short incentive' to participants who complete it but do not qualify for the main questionnaire. Healthcare studies in particular often have 10 to 15 screening questions, and it is very frustrating for panellists to get screened out at the very last question after spending five minutes working through them. A 'short incentive' of 50p to £1 can build trust among panellists and encourages them not to give up on you.
Funnel the questions: start with the wider topic first, with follow-up questions becoming progressively more specific.
Here’s a general sequence for surveys with more than five screener questions:
1. Start easy — Warm up questions
2. Most difficult questions — These take time and require thought
3. End with general questions — Easy to answer and of broad application
Avoid 'one answer' (yes/no) questions. Instead, use 'multiple answer' questions. With a yes/no question, participants have a 50% chance of guessing the qualifying answer, which decreases the credibility of the project; some participants will lie just so they can qualify.
Leading questions can influence an individual to provide a particular response. Answers chosen from a list are more reliable.
Avoid sentences that could confuse participants, along with jargon, acronyms and abbreviations. Once you've sent out an online survey, you will not have an opportunity to clarify whether a participant has understood the questions. It is therefore imperative that participants understand the language used in the questionnaire.
Asking questions with answer choices such as 'often', 'rarely' and 'sometimes' is confusing and subjective. Some participants may define 'often' as once a week, while others may define it as every day.
Provide the option for participants to select ‘Other’, ‘I don’t know’ or ‘None of the above’ as answer options, when a list is not all-encompassing. This will yield better data, by preventing participants from selecting inaccurate answers just to proceed to the next question.
Avoid answers that overlap or cause confusion. A common example is overlapping age or income brackets, such as '18–25' and '25–34', which both include a 25-year-old.
Asking too many open-ended questions leads to panellist fatigue and drop-outs. Participants tend to type gibberish when forced to answer too many, or repetitive, open-ended questions.
On top of this, the researcher is then forced to read through hundreds of responses to manually determine who qualifies for a project and who does not.
Finally, be sure to edit and revise your screener at least twice. Test your screener yourself by going through it and submitting real answers. Then, send it to colleagues and ask them to do the same. This process will help you target the data you need, understand what it feels like to answer the screener, and cut anything extraneous.
There are many moving parts to consider when putting together a UX study to ensure it’s successful. Writing a screener is one of the first, vital steps. It takes practice to make clear, efficient screeners, but it’s necessary so you can be sure your target users are represented in your studies.