How do you speed up your usability testing? We take a look at how automation can help.
There are lots of reasons to invest time in automating usability testing. When set up properly, automation will:
Sounds great. But it also requires careful planning to ensure success. With moderated testing, if the first session goes poorly, a few tweaks to the procedures or stimuli before the next session are all that’s needed to course-correct.
With automated UX testing, a lack of careful preparation and piloting could result in 50 bad sessions before you uncover the issue.
In this article, we will cover a standard usability testing process with an emphasis on required changes for automation.
Usability testing can range from evaluating designs for minor feature changes (A/B or multivariate testing) to benchmarking the overall usability of an application or site.
The purpose of your test determines the scope and specificity of your tasks. Here are a few examples:
This is where moderated and automated usability testing differ the most. Unless you have an unlimited budget and no time constraints, moderated usability testing is generally limited to small numbers of participants and more qualitative data. Even with low participant numbers, major usability issues and some insights will likely be discovered.
For critical decisions, such as choosing between similar design variations or benchmarking an application, larger sample sizes are required, and automated usability testing is critical to achieving those numbers efficiently (if you'd like to delve into this in more detail, check out our article What sample size do you really need for UX research?).
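To make the sample-size point concrete, here is a minimal sketch of the standard two-proportion power calculation (two-sided z-test, using only the Python standard library). The success rates, alpha, and power below are illustrative values, not figures from this article:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed *per variant* to detect a
    difference between two task-success rates with a two-sided z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Distinguishing a 70% from an 80% success rate takes hundreds of
# sessions per variant — far beyond a typical moderated study:
print(sample_size_two_proportions(0.70, 0.80))
```

The takeaway: detecting small differences between similar designs demands participant counts that are only practical with automation.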
In most cases, you need both qualitative and quantitative data: numbers to drive design decisions, plus actionable, relatable insights and examples of specific issues. One approach is to automate a small number of think-aloud usability evaluations while also automating a large benchmarking study. This provides high-confidence numerical evaluations of improvement over time, along with impactful clips and quotes that add detail.
With moderated usability testing, task success is sometimes subjective. If an automated test platform requires video analysis to determine task success, the determination may still be subjective. Determination of success can also be based on:
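Objective criteria like these can be expressed as simple rules. The sketch below is hypothetical — the session fields, target path, time limit, and verification answer are all invented for illustration, not part of any particular platform's API:

```python
def task_succeeded(session, target_path="/checkout/confirmation",
                   time_limit_s=180, required_answer="order placed"):
    """Hypothetical rule-based success check for one recorded session.
    `session` is a dict with the final URL path reached, elapsed time,
    and the participant's answer to a post-task verification question."""
    reached_goal = session["final_path"] == target_path
    within_time = session["elapsed_s"] <= time_limit_s
    verified = required_answer in session["answer"].lower()
    return reached_goal and within_time and verified

session = {"final_path": "/checkout/confirmation",
           "elapsed_s": 95,
           "answer": "Order placed successfully"}
print(task_succeeded(session))  # True
```

Encoding success criteria as explicit rules like this removes the session-by-session judgment calls that make video-based determinations subjective.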
For simple testing of design iterations, this could be very brief and mainly outline the items discussed above for clarity and consensus within a research and design team.
For large benchmarking efforts, taking the time to preview test plans with stakeholders, including agreement on the tasks to be included, is critical for greater impact of the end results.
With a test plan in hand, creating the study in an automation platform will be simpler and faster.
If your platform supports recruitment, pay particular attention to all the platform’s built-in participant attributes (the targeting criteria for invitations).
For example, if your platform allows you to select participants’ employment industry, choosing the relevant industry will tailor your invitations, reduce screen-outs, and possibly eliminate or reduce your recruitment costs.
After that, use screener questions to eliminate potential participants who don’t match your intended profile. Creating segments with caps within the screener ensures that each participant subgroup is filled appropriately.
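The screen-out-and-cap logic works roughly like the sketch below. The question keys, segment names, and quota sizes are hypothetical placeholders, not any platform's actual configuration:

```python
from collections import Counter

def screen(answers, caps, filled):
    """Hypothetical screener: reject participants who don't match the
    target profile, and close each segment once its quota cap is met."""
    if answers.get("uses_product") != "yes":
        return "screen_out"            # fails the profile question
    segment = answers.get("industry")
    if segment not in caps:
        return "screen_out"            # not a targeted subgroup
    if filled[segment] >= caps[segment]:
        return "quota_full"            # subgroup already filled
    filled[segment] += 1
    return "qualified"

caps = {"healthcare": 2, "finance": 2}
filled = Counter()
print(screen({"uses_product": "yes", "industry": "finance"}, caps, filled))
```

Capping each segment up front is what keeps a fast-filling study from ending up with, say, an all-finance sample when you needed a mix.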
Next, just as with a moderated test, provide some introductory text, as well as scenarios and pre-task and post-task questions. More sophisticated automated platforms may include logic to ask one set of questions after task success and another after task failure.
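That success/failure branching amounts to a simple conditional. The questions below are invented examples of the kind of follow-ups each branch might ask:

```python
def post_task_questions(succeeded):
    """Hypothetical branching logic: ask different follow-up
    questions depending on the task outcome."""
    if succeeded:
        return ["How easy or difficult was this task?",
                "What, if anything, slowed you down?"]
    return ["What were you trying to do when you stopped?",
            "What would have helped you complete the task?"]

print(post_task_questions(True)[0])
```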
Pilot testing is important for all usability tests, but with moderated tests, any issues detected in the first few sessions can often be corrected before many participant sessions are wasted.
With automated usability tests, the pace of study completion and the quantity of participants make any setup errors compound quickly. Pilot test yourself first, then with colleagues, and finally with a limited number of participants.
Launching an automated study is usually simple, but there are unique considerations. While moderated studies control the timing of participant interaction, automated recruitment platforms that rapidly recruit and fill a quota may unintentionally bias results by:
These biases may be reduced by including segmentation criteria, but a more effective solution may be using an automation platform that sends invitations in waves spread out over a longer time period (such as a day).
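A wave-based scheduler can be sketched in a few lines. Everything here — the wave count, the 24-hour span, the email list — is a made-up illustration of the idea, not a real platform feature:

```python
import datetime as dt

def invitation_waves(emails, waves=4, start=None, span_hours=24):
    """Hypothetical scheduler: split the invite list into waves spread
    across the day so the fastest responders don't fill the quota."""
    start = start or dt.datetime.now()
    gap = dt.timedelta(hours=span_hours / waves)
    per_wave = -(-len(emails) // waves)  # ceiling division
    schedule = []
    for i in range(waves):
        batch = emails[i * per_wave:(i + 1) * per_wave]
        if batch:
            schedule.append((start + i * gap, batch))
    return schedule

emails = [f"p{i}@example.com" for i in range(10)]
for send_at, batch in invitation_waves(emails, waves=4):
    print(send_at.strftime("%H:%M"), len(batch))
```

Spreading invitations across time zones and daily schedules makes the respondents who happen to be online at launch less dominant in the final sample.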
Automating your usability studies will speed the pace of your user research and facilitate larger numbers of participants, but it requires careful planning and care throughout the process. Make sure you: