When you have the full blessing of your stakeholders to run user research on your product, and you know the areas you want to test and/or improve (all fully aligned with business goals, of course), you’ll then have to decide whether to run your chosen studies moderated or unmoderated.
Within this debate, you’ll also have to figure out why one might be better suited than the other in your specific context, and when both would complement each other very nicely.
Is the distinction between moderated and unmoderated as difficult to grasp as figuring out the difference between flammable and inflammable? Possibly not, but whether you’re trying to surface valuable user insights or set fire to your latest failed DIY project, it’s good to have these things straight in your mind so you can access the right tool (i.e. card sort or bottle of paraffin) as quickly as possible.
The difference between ‘moderated’ and ‘unmoderated’ is quite simply whether a researcher (a moderator) will be present during the test, or whether the participant is left to carry out the tasks without anybody else in the room (or on the other end of a computer).
But if that one-liner is too tl;dr for you, let’s dig deeper into the benefits and challenges of both unmoderated and moderated:
Unmoderated tests are unobserved tests, where a participant is left alone to complete tasks without a moderator present. These sessions can be recorded for later viewing as part of a qualitative study (the results are the spoken-aloud thoughts and feelings of the participants), or the data can be collected and analysed as part of quantitative research (cold, hard numerical results – how many, how often or how much).
Typical unmoderated tests include (but are not limited to):
Also bear in mind that many of these can also be run as moderated sessions. Ah UX testing, you flexible, wonderful thing.
In moderated testing, the participants are observed by a moderator, either in-person or remotely via computer.
The key reason for running moderated sessions is so that you can be in a live setting with a participant. This allows you to have a conversation with your users as you’re observing what they are doing, to better understand their behaviour and dig deep into usability issues and attitudes.
Plus, you can modify your test script on the fly to probe confusing areas and ask ad-hoc follow-up questions.
Speaking of ad-hoc questions – another advantage of conducting moderated sessions is that stakeholders and teammates can observe the sessions anonymously. This is a great way for people who are not usually involved in usability testing to see the process firsthand, which can foster empathy with users and reinforce the need for further user research.
Typical moderated tests include (but are not limited to):
Again, some of these can also be run unmoderated. Apart from the interviews – an unmoderated interview would be, uh… weird.
A moderated card sort
Luckily, with both unmoderated and moderated studies, the UX metrics you can measure can be behavioural (what they do – task success, task time, abandonment rate) and/or attitudinal (what they say – usability, credibility, appearance).
So you can run a basic usability test collecting behavioural and attitudinal data with a moderator, or have it all recorded automatically – along with the user’s spoken-aloud thoughts and feelings – by screen recording software.
Again, this is methodology dependent, as not every testing method can be run both moderated and unmoderated.
Basically, if you want the ability to probe and ask in-the-moment questions, and speed and statistical significance aren’t concerns, then moderated could be the choice for you.
However, if you want to scale your user research and speed up the time to insights, then unmoderated is a surefire way of achieving this.
Hand on heart, we would genuinely recommend a blend of both methodologies – the right mix really is determined by context.
If you have a real sticking point in your customer journey, and you need to drill down to where that is and why users are getting stuck, then running a usability test where you can observe and interview a smaller group of relevant users may be the most effective.
However, to validate these proposed changes or improvements, you could then send a survey to hundreds of users to back up the insight with statistically significant quantitative data.