A comprehensive guide to remote unmoderated usability testing

All you need to know about transitioning to remote research.

As most of the UX world is transitioning out of the lab and into exclusively remote methods of running research and testing, it’s worth exploring unmoderated usability testing as a flexible and versatile method to add to your remote UX toolkit. 

Remote unmoderated testing is known for saving time and money compared to lab-based research, and it can provide statistically valid quantitative data alongside session videos that capture the user’s qualitative and behavioral feedback and surface usability issues quickly.

In this comprehensive guide, we’ll take you through the following topics:

  1. What is remote unmoderated testing?
  2. What technology do I need to run remote unmoderated research?
  3. How to design a remote unmoderated usability study
  4. Tips for writing effective tasks for remote unmoderated studies

1) What is remote unmoderated testing?

Remote unmoderated testing is a method for testing websites, prototypes and mobile applications using online software or a dedicated remote user research platform.

The software enables the automated collection of quantitative, qualitative, and behavioral feedback from participants in their own natural environment using their own computer or device.

Because the data collection is remote, asynchronous, and automated, results can be collected quickly, making the methodology well suited for an agile development environment.

This also means the tests are unobserved, so the participant is left alone to complete tasks without the presence of a moderator.

As well as usability tests, other unmoderated methods include (but are not limited to) card sorting, tree testing, first-click tests, and surveys.

Other benefits of unmoderated testing

As well as saving time, money and the need to leave the house, unmoderated testing has a number of other unique benefits:

  • Because the study runs automatically and multiple participants can take it in parallel, you can gather more responses in a shorter amount of time
  • Allows you to collect statistically significant amounts of data that give you high confidence in your decisions
  • Gives you access to people over a much broader geographical location
  • Enables the democratization of research at organizations by letting multiple teams conduct research engagements without needing to be trained as a moderator

2) What technology do I need to run remote unmoderated research?

If you’ve been running nothing but in-lab research, you may be overwhelmed by the online options available to you, with their varying price points and capabilities.

Often you may find that a cheaper dedicated UX testing tool will suffice, but only by cobbling together a variety of other, unintegrated tools to make it meet your needs. Or perhaps there’s a solution that does everything for you (online usability tests, screen-share technology, automated transcription and reporting) but sits just outside your budget.

If these issues apply to your current situation, may we humbly recommend our UX research solution buyer's guide. It contains everything you need to consider when searching for a UX tool that helps you create user-focused digital experiences that drive business goals and ROI.

3) How to design a remote unmoderated usability study

To get started with designing a remote unmoderated usability test, first develop a project plan to document what you want to learn from the study and how you’d like to obtain that information.

Having the information organized and documented in advance will enable you to set up a study quickly and reduces the likelihood of forgetting something vital.

Here’s everything you need to know…

Step 1: Determine the research goals and objectives

What questions do you have and what do you want to get out of the research?

Here are a few examples of research goals:

  • Identify usability issues of the current website.
  • Discover if users can find relevant product information.
  • Determine if users understand the value proposition of the product.

Your study will need to be designed in a way that enables you to answer these questions. The tasks and follow-up questions will need to be aligned with your study goals.

Step 2: Define your target audience

Who uses your website? What is the profile of the typical user? Consider characteristics such as gender, age, and income, as well as other criteria such as specific behaviors and actions.

For usability tests, you do not necessarily need to test with your exact audience to uncover problems with your site.

For example, if you’re conducting research for a bank with branches located in a few states on the east coast and you want to assess the experience of applying for a mortgage, you don’t need the bank’s actual customers from those exact states who happen to be looking for a mortgage.

Trying to find users who meet those exact criteria can be difficult, time consuming, and costly. You could loosen the criteria to still get the right type of person who would approach the tasks with a similar mindset as actual customers.

To open up the criteria more in the above example, you could include non-customers, across all states, who are likely to apply for a mortgage within the next year or have applied for a mortgage in the past year. Broadening the recruiting criteria a bit will reduce cost and increase the speed of data collection.

Recruiting time and cost is dependent upon the following factors:

  • Incidence rate – How common is the type of person you’re trying to recruit? Using the earlier example, you would have to consider how many customers the bank has, what percentage of those customers are looking for a mortgage, and how many of them are part of an online panel and are ready and willing to take your study.
  • Type of study – Some studies are easier to take than others. Usability studies that require participants to think out loud, for example, are harder to recruit for because they demand more from the participant. Not all participants are willing or able to articulate their thoughts well while interacting with a site.
  • Study length – On average, how long will it take participants to complete the study? To minimize participant fatigue, desktop studies should take no more than 20 minutes to complete, while mobile studies should take no more than 15 minutes. This may also be referred to as length of interview (LOI) when working with panel companies.
  • Sample size – How many completes do you need? For qualitative think-out-loud studies, 5–10 participants are sufficient for identifying common usability issues. For larger quantitative studies, or if you want to uncover less common but potentially serious issues, you’ll need a larger sample size. The number of participants will depend on your study goals and on the level of confidence and margin of error you and your organization are comfortable with (a rough calculation is sketched after this list).
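
For quantitative studies, the sample size can be worked backwards from the margin of error and confidence level you’re willing to accept. Below is a minimal Python sketch of the standard formula for a proportion; the 95% confidence level, 5% margin of error, and 50% assumed proportion are illustrative defaults, not recommendations from this guide.

```python
import math

def sample_size(margin_of_error=0.05, confidence=0.95, proportion=0.5):
    """Estimate how many completes are needed for a given margin of error.

    Standard formula for a proportion: n = z^2 * p(1-p) / e^2.
    proportion=0.5 is the most conservative assumption (largest sample).
    """
    z_scores = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}  # common confidence levels
    z = z_scores[confidence]
    n = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    return math.ceil(n)

# A +/-5% margin of error at 95% confidence needs ~385 completes,
# while +/-10% at 90% confidence needs only ~68.
print(sample_size(0.05, 0.95))  # 385
print(sample_size(0.10, 0.90))  # 68
```

For small qualitative think-out-loud studies, you can skip this calculation entirely and stick with the 5–10 participants mentioned above.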

Step 3: Create a screener

Once you have a target audience in mind, write screener questions that will help narrow down the pool of potential participants to the type of participants you need for your usability test. Avoid leading questions. Screen out anyone who does not fit your target profile.

Here’s a screener you could potentially use for the earlier example – participants who plan to apply for a mortgage or have applied for one in the past year (the skip logic is also sketched in code after the example):

What is your age?

  • Under 18 [SCREENOUT]
  • 18 – 24 [SCREENOUT]
  • 25 – 29
  • 30 – 39
  • 40 – 49
  • 50 – 59
  • 60 +

Which of the following activities do you plan to do in the next 12 months? [Check all that apply]

  • Purchase a home [Must select to continue] 
  • Purchase a car
  • Travel internationally
  • Buy a new phone
  • Change jobs
  • Deliver a baby
  • Attend a concert
  • Go to an amusement park
  • None of the above [SCREENOUT]

If participants selected “Purchase a home” ask:

When you purchase a home, do you plan on applying for a mortgage?

  • Yes
  • No [SCREENOUT]

If participants did not select “Purchase a home” ask:

Which of the following activities did you do in the past 12 months? [Check all that apply]

  • Purchased a home [Must select to continue] 
  • Purchased a car
  • Traveled internationally
  • Bought a new phone
  • Changed jobs
  • Delivered a baby
  • Attended a concert
  • Went to an amusement park
  • None of the above [SCREENOUT]

If participants selected “Purchased a home” ask:

When you purchased a home, did you apply for a mortgage?

  • Yes
  • No [SCREENOUT]
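
Under the hood, a screener like this is just a small piece of branching logic. If it helps to see it laid out explicitly, here is a rough Python sketch of the skip logic above; the qualifies function, its arguments, and the answer labels are hypothetical stand-ins for however your testing platform represents screener responses.

```python
def qualifies(age_band, planned_activities, past_activities,
              plans_mortgage=None, applied_for_mortgage=None):
    """Hypothetical sketch of the screener's skip logic shown above.

    Qualifies participants who plan to buy a home and apply for a mortgage,
    or who bought a home in the past 12 months and applied for one.
    """
    # Q1: screen out anyone under 25 (labels as shown in the screener)
    if age_band in ("Under 18", "18 – 24"):
        return False

    # Q2: planned activities in the next 12 months ("None of the above" screens out)
    if "None of the above" in planned_activities:
        return False
    if "Purchase a home" in planned_activities:
        # Follow-up: must plan to apply for a mortgage
        return plans_mortgage is True

    # Q3 (only asked if they didn't select "Purchase a home"): past 12 months
    if "Purchased a home" in past_activities:
        # Follow-up: must have applied for a mortgage
        return applied_for_mortgage is True

    # Neither a planned nor a past home purchase: screen out
    return False

# e.g. a 32-year-old planning to buy a home and apply for a mortgage qualifies
print(qualifies("30 – 39", {"Purchase a home", "Travel internationally"}, set(),
                plans_mortgage=True))  # True
```

In practice you’d configure this with your platform’s skip logic rather than code, but spelling it out this way makes it easier to spot gaps, such as respondents who could qualify through more than one path.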

When writing screeners, keep in mind the following:

  • Avoid leading the participants. In their eagerness to participate in studies, participants may anticipate the answers you’re looking for. An example of a leading question is, “Will you apply for a mortgage in the next 12 months?”
  • Do not include unnecessary questions. Screeners that are too long result in participants spending too much time qualifying for the study, getting fatigued, and dropping out of the research. Demographic, psychographic, and behavioral questions that you’re not using for screening purposes can be placed in the introductory section or at the end of the study.

Step 4: Add introductory questions

Introductory questions should be added to a ‘welcome page’ designed to set expectations for the study. Include information like the approximate study length, the number of tasks participants will need to perform, and whether they’ll need to think out loud.

Simple demographic, psychographic, and behavioral questions can also be asked after the welcome page. These types of questions are great for getting participants warmed up before they perform usability tasks, and they can also be used for segmentation or filtering purposes later.

It’s also not uncommon to include pre- and post-site experience questions to see how participants’ experience on the site impacts their ratings afterwards. Pre-site experience questions can be included in the introductory questions and asked again at the end of the study. Examples might include brand perception and brand attributes.

Step 5: Determine the tasks

Keeping in mind the study goals and the recommended study length, outline the tasks you’d like participants to perform. Estimate how long you think the average user would take to perform each task. The tasks you decide on may be based on what you’d like to learn and the top tasks a particular type of user would typically perform on your site.

When you write the tasks, be sure to provide the context around the task so participants approach the task with the right perspective. There are more tips on writing effective study tasks further down the page.

Step 6: Decide on the task follow-up questions

These questions should be designed to help answer your research question and meet your study goals. Keep in mind that a variety of information can be collected automatically in remote usability tests, including task success, time on task, number of clicks, video, think-out-loud audio, clickstreams, and heatmaps.

However, don’t rely on a single data point; survey questions after each task can provide a richer understanding of the user experience. A combination of different types of behavioral and attitudinal data helps triangulate on the user experience.

Close-ended survey questions can be useful in understanding participants’ attitudes and perceptions. They can also aid with analysis by quantifying responses. Rating scales can capture information about task difficulty, feedback on content, features and perception of a product or landing page. Multiple choice questions around task problems and frustrations can help quantify issues.

Filters can be created from specific close-ended responses to help isolate and understand why a particular issue is occurring by looking at other data points, such as time, success, and task videos. Filters can also be used to explore the differences and similarities between user segments.
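
As a rough illustration of this kind of filtering, here’s a hypothetical pandas sketch. The file name and column names (task_difficulty, task_success, time_on_task_s, segment, participant_id) are assumptions about how a platform might export task results, not a real export format.

```python
import pandas as pd

# Hypothetical per-participant export of one task's results from your testing platform
results = pd.read_csv("task1_results.csv")

# Filter: participants who rated the task as difficult (e.g. 1-2 on a 5-point ease scale)
struggled = results[results["task_difficulty"] <= 2]

# Look at other data points for that group to understand why
print(struggled[["task_success", "time_on_task_s"]].describe())
print(struggled["participant_id"].tolist())  # whose task videos to review first

# Compare segments, e.g. existing customers vs. non-customers
print(results.groupby("segment")[["task_success", "time_on_task_s"]].mean())
```

The same idea applies regardless of tooling: slice on the attitudinal response first, then look at the behavioral data and videos for that slice.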

Open-ended survey questions allow participants to provide free-form text responses about their task experience. Even if you collect think-out-loud data as participants interact with the site and perform the task, it’s useful to have some open-text questions that allow participants to reflect on their experience. Some participants may not be able to articulate their thoughts out loud in the moment and need time to reflect on their experience afterwards.

Step 7: Add final questions

While task follow-up questions focus on the experience participants have when performing each task, post-site experience questions are good for understanding how participants felt about their overall experience with the site.

Common final questions may include SUS (System Usability Scale), NPS (Net Promoter Score), Qx Score, SUPR-Q, Satisfaction, and Call-to-Action measures (e.g., likelihood to sign up, likelihood to make an appointment). You’ll find more information on these scores in our article on UX KPIs and metrics.
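
If you ever need to compute these yourself rather than relying on your platform’s reporting, SUS and NPS follow simple, well-documented scoring rules. Here’s a minimal Python sketch; the response values are made-up example data.

```python
def sus_score(responses):
    """SUS: ten items rated 1-5; odd items score (r - 1), even items (5 - r);
    the sum is multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i=0 is item 1 (an odd item)
    return total * 2.5

def nps(scores):
    """NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # one example participant -> 85.0
print(nps([10, 9, 8, 7, 6, 10, 9, 3]))            # example panel -> 25
```

Both are attitudinal benchmarks, so interpret them alongside the behavioral data collected during the tasks.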

Be sure to include any questions you had planned on asking pre and post-site interaction to determine if there’s a positive or negative change after participants performed the usability tasks.

4) Tips for writing effective tasks for remote unmoderated studies

Another major benefit of conducting remote unmoderated research is that tasks and questions can be asked consistently without any potential moderator bias.

However, with no moderator, writing clear and effective tasks is vital as there’s no opportunity for clarification. If participants don’t understand what they need to do, you won’t get great data.

With that in mind, here are six tips to keep in mind while writing tasks for your remote unmoderated research studies that will enable you to obtain the best results possible.

Avoid leading the participant

Use general terms that the participants would normally understand and use without leading them.  For example, if you’d like to test the registration process for a newsletter and the site uses the term ‘register’, you could ask them to ‘sign up’ instead.

Be Direct

Participants need to be crystal clear about what you would like them to do. Ambiguous tasks lead to participant confusion and, consequently, bad data.

If participants need to find information, tell them. If you’d like to focus on the navigation and do not want them to search, then instruct them accordingly.

If you’d like to capture their verbal feedback, tell participants to think out loud during the study. Although they may have been informed at the beginning of the study, a brief reminder during the tasks that require thinking out loud can’t hurt.

Avoid framing the task as a question. With think-out-loud studies, participants may just respond verbally rather than actually interacting with the site, causing you to miss out on the clickstreams, heatmaps, and videos that could have been captured.

Consider these two examples…

Example one:

Bad example: “Where would you go to contact someone for more information?”

Because this is phrased as a question, participants may simply answer it verbally, giving you limited behavioral feedback. Instead, try this…

Good example: “Imagine that you’re interested in learning more about the services provided by COMPANY X. Find the options available for contacting the company for more information.”

Example two:

Bad Example: “Your name is Joseph Smith. You’d like to get an appointment to speak to a doctor about back pain you have been having lately, preferably over the phone if possible without having to go in. Your phone number is 555-555-1212.”

In the example above, participants are given information about their circumstances, but not explicitly instructed on what they need to do in the task. Try this instead…

Good example: “You’d like to get an appointment about back pain. You prefer to have a phone appointment and not go in if possible. Using the following information, start the process of setting up a phone appointment for your back pain. STOP before actually setting up the appointment.

Name: Joseph Smith

Phone Number: 555-555-1212”

Provide context 

In order to provide good feedback, study participants need to have the appropriate mindset and relevant information. Instructions must include context. Remember these participants are not necessarily going to be familiar with your products or services prior to taking the study. You need to set the stage for them.

The context can be provided at the beginning of the study or in the task instructions. If participants will be interacting with a prototype that might be slow, not fully built out, etc., let them know that they’ll be evaluating a prototype.

Provide all the information participants need to complete the task

If participants need to log in to interact with a site or with a prototype, provide them with the login and password. If they need to fill out form fields, provide them with specific information they can use or let them know that they can use fake information if the form allows for that.

If personal information is needed (e.g., social security number, credit card, email), provide fake personal information that will be accepted by the site.

Be clear and concise

Avoid using unnecessary words. Use language that participants understand; don’t use unnecessary jargon. Don’t use acronyms without first explaining what the acronyms stand for.

Make tasks easy to consume

Even with clear and concise language, it can be difficult for participants to understand what they need to do with the limited real estate available in the taskbar.

  • Use paragraph breaks for mobile tasks.
  • Use bullets and numbering.
  • Bold, italicize, underline, use caps or different colored text to highlight key information.

The concept of ‘garbage in, garbage out’ applies to conducting research. Writing straightforward and concise tasks is essential to getting valuable insights. Participants should have a clear understanding of what they need to do, as well as the context behind the tasks. Otherwise, tasks won’t be interpreted correctly and you won’t get the data you need.

Summary

Developing a project plan with clearly defined study goals and a recruiting plan is essential to conducting a good remote unmoderated usability study.

  • Outline the framework of the study (i.e., Introductory Questions, Tasks, Task Follow-Up Questions and Final Questions) and fill in the details later.
  • Determine your recruiting criteria. Remember, to identify usability issues, you do not necessarily need your exact audience. It’s often possible to broaden the criteria to get the right type of participants, saving you recruiting costs and time.
  • Keep the participant study length within 20 minutes for desktop and 15 minutes for mobile.

With remote unmoderated studies, there won’t be an easy opportunity to probe participants later. Planning ahead, anticipating what the participant experience will be like, and formulating appropriate questions, combined with the data that is collected automatically (e.g., videos, audio, heatmaps), will provide rich insight into the user experience.