What are screeners and why are they vital in research?

It’s all about saving time, money and ensuring your test results are valid and genuinely insightful.

So you’re about to run a usability test on some function of your website, app or digital product. Great! But how can you guarantee that you’re inviting the *right* people to participate in your study?

It doesn’t matter what test you’re running – a card sort, an online survey, or a remote usability test – nor does it matter if it’s moderated (the researcher is present) or unmoderated (the participant flies solo). If the people you recruit for the test aren’t the sort of people who will use your product in the first place, then their insight is a pointless waste of time at best – and at worst, it could steer your product in completely the wrong direction.

That’s not to say that you should only recruit your actual customers for a study – after all, you’ll want to attract new customers to your product. However, there are people you’ll want to filter in or out, depending on what you’re trying to achieve.

Ready to begin? Let’s start with the basics…

With huge thanks to Chris Spooner (our Resourcing Project Manager), Clare Burroughs (our Customer Marketing Manager), Becca Kennedy (Human Factors Psychologist and author of excellent UX articles) and Andrea Peer (our VP Product Strategy and Research) for their invaluable input.

What is a screener question?

Before running any kind of user test, you’ll want to find people to participate – people who will carry out the tasks you set them.

A screener (or screener question) gives you a bit more control over who carries out your test before they begin.

This will also help you filter out anybody who wouldn’t necessarily be right for it.  

For instance, if you’re testing the navigation of a website that has a sole audience of civil engineers, and you want to make sure all the relevant categories are represented and sit in the right menu, there’s little point in recruiting a load of people who literally just this second had to Google the term ‘civil engineers’.

Therefore you’ll want to find out what level of knowledge a participant has around civil engineering terms.

What does a screener look like?

You’ll likely have two types of question available:

  • Multiple choice (the participant chooses one answer)
  • Checkboxes (the participant chooses all options that apply)

Be careful when selecting the type of screener question you want to use and think about how people might want to answer it. If the participant is likely to give more than one answer, then give them that option.

Here’s an example by Clare Burroughs of a screener question for an online car dealership:

Which of the following statements apply to you?

  •      I bought a car online over a year ago. (Screenout)
  •      I bought a car online within the last year. (Next)
  •      I’ve not bought a car but am planning to in the next 6 months. (Next)
  •      I never have and never will buy a car. (Screenout)
  •      I prefer not to say. (Screenout)

The question is worded so as not to lead the user towards one answer over another.

The ‘Next’ or ‘Screenout’ labels are for internal use only and show whether a participant would be selected to take the test. Next means they will proceed; Screenout means they won’t be suitable for this particular study.
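The routing above boils down to a simple mapping from each answer option to an outcome. As a minimal, hypothetical sketch (the options mirror the car-dealership example; the function name and the defensive default are our own, not part of any testing platform’s API):

```python
# Each answer option maps to an outcome: "next" (participant proceeds)
# or "screenout" (they won't be suitable for this study).
# Options mirror the car-dealership screener above.
SCREENER_OPTIONS = {
    "I bought a car online over a year ago.": "screenout",
    "I bought a car online within the last year.": "next",
    "I've not bought a car but am planning to in the next 6 months.": "next",
    "I never have and never will buy a car.": "screenout",
    "I prefer not to say.": "screenout",
}

def route(answer: str) -> str:
    """Return 'next' or 'screenout' for a participant's chosen option."""
    # Unrecognised answers are screened out rather than let through.
    return SCREENER_OPTIONS.get(answer, "screenout")
```

Note the conservative default: anything the screener doesn’t recognise is screened out, so a malformed response can’t slip into the study.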

An example of a Screener question from UserZoom

Avoiding Yes/No Questions

Try to ask ‘open questions’, as yes/no questions are commonly seen as too leading. Chris Spooner says, “The risk of this is that participants may answer falsely by what they assume to be the required answer, in order to qualify for the study. So, for example, if you are looking to recruit ‘Ford Drivers’, then rather than asking – ‘Are you a Ford driver?’ it would be more effective to ask which brand of car they drive and provide a list of car brands to choose from.“

Can personas or demographics help find the right participants?

Before setting these questions, you’ll want to actually ascertain who exactly your ideal test participant is. And if your team or organization has already defined your business’s key personas… awesome, the hard work has already been done!

Personas are a way to help organisations understand their potential and existing audience in a more personal way. In essence, a persona is a detailed profile of a particular audience member who represents a distinct group of people – they share similar behaviours, attitudes, personalities and preferences around your product, but act as the ‘figurehead’ for a larger demographic.

You could therefore tailor your questions to whittle down potential participants to the most valuable people.

However, as Chris Spooner states, it may benefit you to cast your net wider:

“In my opinion it is important to know why you need certain participants for the study. If you believe you can still obtain useful data from a more general specification of users then it is advised to do so, as it will work out cheaper and speed up field time. 

I have experienced a few cases where we are given detailed personas and we have to narrow it down to identify a common theme between them. This can be a tricky process, and whilst I recognise their importance in UX research, when considering screener questions it is helpful to centre on common attributes such as ‘works in finance’ or ‘has children under 18’, based on what is most important for your research.

Demographics can be helpful, and they are simple to set up as questions or pre-target on. When deciding whether to use them, it is worthwhile considering how important your other criteria are, as demographics naturally limit the sample available to you.”

Tips for writing a screener question

With thanks to Chris Spooner and Clare Burroughs for the following insight…

  • Be direct and clear; sometimes it is easier to give the user statements that they may identify with instead of asking them a question.
  • Remember that you’re dealing with people. You may have a very corporate tone in your business, but this can be alienating and confusing to your participants. Write questions using the language of the user, not the business.
  • Don’t make it easy. People naturally want to help so they will try and find the answer they think you want. That’s why you should make sure it’s not obvious which answer you’re looking for.
  • When setting up questions, try to envisage how a participant would approach the question and aim to give the target audience the best chance of entering the study. If you were looking for a ‘small business owner’, some participants may identify themselves as ‘self-employed’ rather than a ‘business owner’, especially if they were just starting up. So, it would be worthwhile to let these people in and not wrongly screen them out, therefore limiting your sample.
  • People may attribute themselves to more than one answer. For example, if you were looking to identify those with a certain medical disorder, people may have more than one, so you have to account for this in the question.
  • Avoid leading questions. As Becca Kennedy states in her blog on avoiding leading questions, “Your job as a UX researcher is to uncover truth and honesty. Your job is to gather user feedback that isn’t coloured by your own hopes or expectations. Your job is to listen, and to be deliberate with your words and actions.”
  • If doing competitor analysis, the question: “Do you work for a competitor company?” is pointless as a participant probably knows the correct answer and may well lie to get selected. UserFocus suggests avoiding this by simply asking an open question: “Where do you work?” or “Tell us about your job?”
  • Avoid beginning a test with a difficult screener, with the intention of simplifying it if you struggle in field. On paper this makes sense and allows you to identify an incidence rate (the percentage of those screened who are eligible to take part in the study) while you monitor progress. However, the danger is that you generally cannot invite back participants who were originally screened out. So once you have relaxed the criteria of your screener, you are at a disadvantage, as you have access to fewer participants to take part in the study.
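The incidence rate mentioned in the last tip is just the share of screened people who qualify. A quick sketch with illustrative numbers (the function name is our own):

```python
def incidence_rate(qualified: int, screened: int) -> float:
    """Percentage of screened participants who passed the screener."""
    return 100.0 * qualified / screened

# e.g. if 30 of 200 screened participants qualify,
# the incidence rate is 15% -- a strict screener; relaxing it
# later won't recover the 170 people already screened out.
rate = incidence_rate(30, 200)
```

A low rate early in field is the warning sign: it tells you the screener is strict *before* you have burned through your available sample.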

Very important final takeaway

If there’s one thing you should definitely keep in mind, it’s this piece of advice from Dr. Andrea Peer, our VP Product Strategy and Research…

Getting the best fit for your research involves juggling three critical variables: representativeness, time in field and cost.

If exact representation is critical for your research questions then it will take more time in the field and often cost more. Usually when running research that requires assessment of a user’s ability to make a decision on specific things, high representation is important.

If getting results quickly is most important, then relaxing the representation criteria and being willing to pay a little more will ensure the fastest possible turnarounds. Classic usability questions could fall into this category.

To learn more about UserZoom’s all-in-one UX research platform, please get in touch!

Enroll in the UserZoom Academy, our free, online place of learning