9 tips for automating your usability testing process

By UserTesting | May 4, 2023

Usability testing is a fundamentally human process. But it can be greatly enhanced by employing automation in a thoughtful way.

There are lots of reasons for organizations to invest time in automating usability testing. When set up properly, automation will:

  • Accelerate the pace of your UX research insights
  • Increase the number of participants
  • Result in more confident design decisions 

The speed and ease of automation cut both ways, though. It's important to exercise caution when running automated tests: without careful preparation and piloting, you could end up with 50 bad sessions before you uncover the issue. With moderated testing, by contrast, if the first session goes poorly, a few tweaks to the procedures or stimulus before the next session are all that's needed to course-correct.

In this article, we'll cover a standard usability testing process with an emphasis on required changes and best practices for automation. 

1: Determine the scope, purpose, and hypothesis 

Usability testing can range from evaluating designs for minor feature changes (A/B or multivariate testing) to benchmarking the usability of an entire application or site.

  • Coordinate with your stakeholders to determine your testing scope and purpose. 
  • Create a hypothesis to provide the framework for measuring your outcome. An example of an A/B testing hypothesis might be, "Changing the color of the call-to-action button from red to green will increase conversion rates." (A quick way to check such a hypothesis against real data is sketched after this list.)
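If you later want to test a hypothesis like this against actual conversion counts, a two-proportion z-test is one common approach. Here's a minimal Python sketch; the counts below are made up for illustration, and a dedicated experimentation platform would normally run this test for you:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up counts: variant A (red button) vs. variant B (green button)
p = two_proportion_z_test(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"p = {p:.3f}")  # a value below your chosen threshold (often 0.05) supports the change
```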

2: Determine the tasks pertinent to your goals

The purpose of your test determines the scope and specificity of your tasks. Here are a few examples:

  • Feature enhancements: If you're evaluating a design for a feature enhancement, you'll need to develop a scenario and very specific tasks that require participants to use that feature. In an automated test, since a moderator isn't there to keep the participant on track, the directions and tasks must be very clear. Pilot testing with colleagues is essential to spot any instructional issues.
  • Benchmarking: If you're using automation to benchmark the overall usability of a site or application, work with stakeholders to determine the critical tasks. Ideally, these tasks should also be:
    1: Enduring: You expect these tasks to remain relevant, allowing comparison of results across time (e.g., next quarter, next year).
    2: Pervasive: You expect the tasks to allow benchmarking across both your product and competing products.

Getting clarity at this stage will also help you to determine which usability testing tools will be the most appropriate for your needs.

3: Determine the number of users

This is where manual and automated usability testing differ the most. Unless you have an unlimited budget and no time constraints, manual usability testing is mostly limited to smaller numbers of participants and more qualitative data. With a low number of participants, you'll likely uncover the glaring usability issues and a few helpful insights.

For critical decisions involving choosing between similar design variations or for benchmarking an application, larger numbers are required and automated usability testing is critical to achieving those numbers efficiently.

In most cases, you need both qualitative and quantitative data to drive design decisions, along with actionable, relatable insights and examples of specific issues. One approach is to automate a small number of think-aloud usability evaluations alongside a large automated benchmarking study. This pairs high-confidence numerical evaluations of improvement over time with impactful clips and quotes that supply the detail.
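To make the sample-size tradeoff concrete, here's a minimal Python sketch, using the Wilson score interval, of how the uncertainty around the same observed 80% task-success rate shrinks as participant numbers grow (the counts are illustrative):

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed task-success rate."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# The same 80% observed success rate, at three different sample sizes
for successes, n in [(4, 5), (40, 50), (160, 200)]:
    low, high = wilson_interval(successes, n)
    print(f"n={n:3d}: 95% CI [{low:.0%}, {high:.0%}]")
```

At n=5 the interval spans roughly 38% to 96%, far too wide to support a confident decision; at n=200 it narrows to roughly 74% to 85%.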

4: Determine what participants are needed and how to recruit them

  • Define your participant requirements, then convert them into a clear screener. If providing the screener to a panel company, this can be a simple document with clear participant acceptance criteria.
  • For automated usability testing platforms that incorporate participant recruitment, develop a screener with appropriate logic and conditions to screen participants without making the ‘correct’ answer obvious to potential participants (a sketch of this kind of logic follows the list).
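Here's a minimal Python sketch of that screener logic, with hypothetical questions for an imagined travel-booking study; note how each option list mixes qualifying answers with plausible distractors, so the 'right' answer isn't telegraphed:

```python
from dataclasses import dataclass

@dataclass
class ScreenerQuestion:
    prompt: str
    options: list[str]      # qualifying answers mixed with plausible distractors
    qualifying: set[str]    # answers that pass this question

# Hypothetical screener for a travel-booking study
SCREENER = [
    ScreenerQuestion(
        prompt="Which of these have you done in the past month?",
        options=["Booked a flight", "Filed taxes online", "Sold a car", "None of these"],
        qualifying={"Booked a flight"},
    ),
    ScreenerQuestion(
        prompt="Which industry do you work in?",
        options=["Healthcare", "Retail", "Travel", "Education", "Other"],
        qualifying={"Healthcare", "Retail", "Education", "Other"},  # screen out travel insiders
    ),
]

def qualifies(responses: list[str]) -> bool:
    """Invite a participant only if every answer (in question order) qualifies."""
    return all(answer in q.qualifying for q, answer in zip(SCREENER, responses))

print(qualifies(["Booked a flight", "Retail"]))   # True  -> invite
print(qualifies(["Booked a flight", "Travel"]))   # False -> screen out
```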


5: Determine how you will measure task success and other metrics

With moderated usability testing, task success is sometimes subjective. If an automated test platform requires video analysis to determine task success, the determination may still be subjective. Determination of success can also be based on:

  • Participants’ self-reported task success or failure. This is still somewhat subjective but can help automate analysis of larger numbers of participants.
  • Correct answers to questions based on information only available upon task success. 
  • Reaching a specified URL. This is generally preferred for automating success metrics with a large study but may not work with all prototyping tools or applications (see the sketch after this list).
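For URL-based success, the check itself can be simple. Here's a minimal Python sketch assuming a hypothetical confirmation-page path; it compares only the URL path, so query strings and tracking parameters don't cause false failures:

```python
from urllib.parse import urlparse

SUCCESS_PATH = "/checkout/confirmation"  # hypothetical page reached only on task success

def task_succeeded(final_url: str) -> bool:
    """Mark the task successful if the participant reached the confirmation page.

    Comparing only the path ignores query strings and tracking parameters
    that vary from session to session.
    """
    return urlparse(final_url).path.rstrip("/") == SUCCESS_PATH

print(task_succeeded("https://example.com/checkout/confirmation?order=42"))  # True
print(task_succeeded("https://example.com/cart"))                            # False
```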

6: Write up the test plan

For simple testing of design iterations, this could be very brief, simply outlining the items discussed above for clarity and consensus within a research and design team. 

For large benchmarking efforts, taking the time to preview test plans with stakeholders, including agreement on the tasks to be included, is critical to maximizing the impact of the final results.

With a test plan in hand, creating the study in an automation platform will be simpler and faster.

7: Create your study within the automated platform

If your platform supports recruitment, pay particular attention to all the platform’s built-in participant attributes (the targeting criteria for invitations). 

For example, if your platform allows you to select participants’ employment industry, choosing the relevant industry will tailor your invitations, reduce screen-outs, and possibly eliminate or reduce your recruitment costs. 

After that, use screener questions to eliminate potential participants who don't match your intended profile. Creating segments with caps within the screener ensures that subgroups of participants are appropriately filled.

Next, just as with a moderated test, provide some introductory text, as well as scenarios and pre-task and post-task questions. More sophisticated automated platforms may include logic to ask one set of questions after task success and another after task failure. 
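Real platforms implement segment caps and question branching for you, but a minimal Python sketch (with made-up segment names, quotas, and questions) shows the underlying logic:

```python
from collections import Counter

# Hypothetical quotas: 20 participants per device-preference segment
SEGMENT_CAPS = {"mobile_first": 20, "desktop_first": 20}
filled: Counter = Counter()

def admit(segment: str) -> bool:
    """Admit a qualified participant only while their segment still has room."""
    if filled[segment] < SEGMENT_CAPS.get(segment, 0):
        filled[segment] += 1
        return True
    return False  # segment full: screen out even though the person qualified

def post_task_questions(succeeded: bool) -> list[str]:
    """Branching logic: different follow-ups after task success vs. failure."""
    if succeeded:
        return ["How easy or difficult was that task?",
                "What, if anything, slowed you down?"]
    return ["What were you trying to do when you stopped?",
            "What would have helped you complete the task?"]
```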

8: Pilot test your study 

Pilot testing is important for all usability tests, but with moderated tests, any issues detected in the first few sessions can often be corrected before many participant sessions are wasted. 

With automated usability tests, the pace of study completion and the quantity of participants make any setup errors compound quickly. Pilot test the study yourself, then with colleagues, and finally with a limited number of participants.

9: Launch your study 

Launching an automated study is usually simple, but there are unique considerations. While moderated studies control the timing of participant interaction, automated recruitment platforms that rapidly recruit and fill a quota may unintentionally bias results by:

  • Limiting participants to certain time zones and geographies
  • Reducing diversity of gender, age, or race due to the time of fulfillment

These biases may be reduced by including segmentation criteria, but a more effective solution may be using an automation platform that sends invitations in waves spread out over a longer time period (such as a day).
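Here's a minimal Python sketch of that wave-based approach, spacing hypothetical invitation batches evenly across a 24-hour window:

```python
from datetime import datetime, timedelta

def schedule_waves(invitees: list[str], waves: int,
                   span_hours: float = 24.0) -> list[tuple[datetime, list[str]]]:
    """Split invitations into evenly spaced waves across a longer window,
    so early responders in one time zone can't fill the whole quota."""
    start = datetime.now()
    gap = timedelta(hours=span_hours / waves)
    per_wave = -(-len(invitees) // waves)  # ceiling division
    return [
        (start + i * gap, invitees[i * per_wave:(i + 1) * per_wave])
        for i in range(waves)
    ]

# Hypothetical batch of 12 invitations spread over a day in 4 waves
for send_at, batch in schedule_waves([f"p{n}" for n in range(12)], waves=4):
    print(send_at.strftime("%a %H:%M"), batch)
```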

Conclusion

As you can see, successful automation requires careful planning and attention throughout the testing process.

To summarize, make sure you:

  • Write clear tasks to support automated testing
  • Determine the exact success metrics that can be automated
  • Carefully use the automated platform to get the right participants and accomplish your goals
  • Pilot test with yourself and colleagues to verify the procedures and stimulus
  • Launch carefully to reach your intended participants

If you follow these steps, you'll set yourself up for faster, easier results through automated usability testing. 

About the author(s)
UserTesting

With UserTesting’s on-demand platform, you uncover ‘the why’ behind customer interactions. In just a few hours, you can capture the critical human insights you need to confidently deliver what your customers want and expect.