How to design a Remote Unmoderated Usability Test
Remote unmoderated usability testing is a flexible and versatile methodology that caters to various research needs.
Compared to a lab-based study, it saves both time and money, and it can provide statistically valid quantitative data alongside direct evidence of usability issues through video, qualitative, and behavioral data.
In the following article, we’ll take you through the seven steps you need to follow to create an effective project plan for all remote unmoderated usability studies.
What is remote unmoderated research?
Remote unmoderated research is a method for testing websites, prototypes and mobile applications using a software tool.
The software enables the automated collection of quantitative, qualitative, and behavioral feedback from participants in their own natural environment using their own computer or device.
Because the data collection is remote, asynchronous, and automated, results can be collected quickly making the methodology well suited for an agile development environment.
How to run a remote unmoderated usability test
To get started with designing a remote unmoderated usability test, first develop a project plan to document what you want to learn from the study and how you’d like to obtain that information.
Having the information organized and documented in advance will enable you to set up a study quickly and reduce the likelihood of forgetting something vital.
Here’s everything you need to know…
Step 1: Determine the research goals and objectives
What questions do you have and what do you want to get out of the research?
Here are a few examples of research goals:
- Identify usability issues of the current website.
- Discover if users can find relevant product information.
- Determine if users understand the value proposition of the product.
Your study will need to be designed in a way that enables you to answer these questions. The tasks and follow-up questions will need to be aligned with your study goals.
Step 2: Define your target audience
Who uses your website? What is the profile of the typical user? Take into consideration characteristics such as gender, age, and income. Also take into account other criteria, such as specific behaviors and actions.
For usability tests, you do not necessarily need to test with your exact audience to uncover problems with your site.
For example, if you’re conducting research for a bank with branches in a few states on the East Coast and you want to assess the experience of applying for a mortgage, you don’t need the bank’s actual customers from those exact states who are seeking a mortgage.
Trying to find users who meet those exact criteria can be difficult, time consuming, and costly. You could loosen the criteria to still get the right type of person who would approach the tasks with a similar mindset as actual customers.
To open up the criteria more in the above example, you could include non-customers, across all states, who are likely to apply for a mortgage within the next year or have applied for a mortgage in the past year. Broadening the recruiting criteria a bit will reduce cost and increase the speed of data collection.
Recruiting time and cost is dependent upon the following factors:
- Incidence rate – How common is the type of person you’re trying to recruit? Using the earlier example, you would have to consider how many customers the bank has and what percentage of those customers are looking for a mortgage. Next, consider how many of those customers are part of an online panel and are ready and willing to take your study.
- Type of Study – Some studies are easier to take than others. Usability studies that require participants to think out loud, for example, are harder to recruit for because they demand more from the participant. Not all participants are willing or able to articulate their thoughts well while interacting with a site.
- Study Length – On average, how long will it take for participants to complete the study? To minimize participant fatigue, studies on desktop should take no more than 20 minutes to complete, while mobile studies should be no more than 15 minutes. This may also be known as length of interview (LOI) when working with panel companies.
- Sample size – How many completes do you need? For qualitative think-out-loud studies, 5-10 participants is sufficient for identifying common usability issues. For larger quantitative studies, or if you want to uncover less common but potentially serious issues, you will need a larger sample size. The number of participants will depend on your study goals and the level of confidence and margin of error you and your organization are comfortable with. More information on determining sample sizes can be found here.
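For quantitative studies, a common back-of-the-envelope calculation for the sample size needed to estimate a proportion (such as a task completion rate) can be sketched in a few lines of Python. The defaults below (95% confidence, ±5% margin of error, p = 0.5 as the most conservative assumption) are illustrative, not recommendations for any particular study:

```python
import math

def sample_size_for_proportion(confidence_z: float = 1.96,
                               margin_of_error: float = 0.05,
                               expected_proportion: float = 0.5) -> int:
    """Participants needed so an estimated proportion (e.g., a task
    completion rate) falls within +/- margin_of_error at the given
    confidence level (z = 1.96 corresponds to 95% confidence)."""
    n = (confidence_z ** 2) * expected_proportion * (1 - expected_proportion) \
        / margin_of_error ** 2
    return math.ceil(n)

print(sample_size_for_proportion())                       # +/-5%  -> 385
print(sample_size_for_proportion(margin_of_error=0.10))   # +/-10% -> 97
```

As the example shows, halving the margin of error roughly quadruples the required sample, which is why purely qualitative studies settle for far smaller samples.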
Step 3: Create a Screener
Once you have a target audience in mind, write screener questions that will help narrow down the pool of potential participants to the type of participants you need for your usability test. Avoid leading questions. Screen out anyone who does not fit your target profile.
Here’s a screener you could potentially use for the earlier example – participants who plan to apply for a mortgage or have applied for one in the past year:
What is your age?
- Under 18 [SCREENOUT]
- 18 – 24 [SCREENOUT]
- 25 – 29
- 30 – 39
- 40 – 49
- 50 – 59
- 60 +
Which of the following activities do you plan to do in the next 12 months? [Check all that apply]
- Purchase a home [Must select to continue]
- Purchase a car
- Travel internationally
- Buy a new phone
- Change jobs
- Deliver a baby
- Attend a concert
- Go to an amusement park
- None of the above [SCREENOUT]
If participants selected “Purchase a home” ask:
When you purchase a home, do you plan on applying for a mortgage?
- No [SCREENOUT]
If participants did not select “Purchase a home” ask:
Which of the activities did you do in the past 12 months? [Check all that apply]
- Purchased a home [Must select to continue]
- Purchased a car
- Traveled internationally
- Bought a new phone
- Changed jobs
- Delivered a baby
- Attended a concert
- Went to an amusement park
- None of the above [SCREENOUT]
If participants selected “Purchased a home” ask:
When you purchased a home, did you apply for a mortgage?
- No [SCREENOUT]
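Unmoderated testing tools typically handle this kind of skip logic automatically, but the branching above can be sketched as a small Python function. The question and option labels mirror the example screener; the function signature itself is a hypothetical simplification:

```python
# Age buckets the example screener disqualifies outright.
SCREEN_OUT_AGES = {"Under 18", "18 - 24"}

def qualifies(age: str,
              planned_activities: set,
              past_activities: set,
              plans_mortgage: bool = False,
              applied_mortgage: bool = False) -> bool:
    """Return True if the participant passes the mortgage screener."""
    if age in SCREEN_OUT_AGES:
        return False
    if "Purchase a home" in planned_activities:
        # Branch 1: plans to buy a home -> must also plan to apply for a mortgage
        return plans_mortgage
    if "Purchased a home" in past_activities:
        # Branch 2: bought a home in the past 12 months -> must have applied
        return applied_mortgage
    # Neither branch matched ("None of the above") -> screen out
    return False

# A participant in the 30-39 bucket planning to buy a home with a mortgage qualifies.
print(qualifies("30 - 39", {"Purchase a home"}, set(), plans_mortgage=True))  # True
```

Writing the logic out this way can also serve as a quick sanity check that every path through the screener ends in either a qualify or a screen-out.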
When writing screeners, keep in mind the following:
- Avoid leading the participants. In their eagerness to participate in studies, participants may anticipate the answers you’re looking for. An example of a leading question is, “Will you apply for a mortgage in the next 12 months?”
- Do not include unnecessary questions. Screeners that are too long result in participants spending too much time qualifying for the study, getting fatigued, and dropping out of the research. Demographic, psychographic, and behavioral questions that you’re not using for screening purposes could be placed in the introductory section or at the end of the study.
Step 4: Add Introductory Questions
Introductory questions should be added to a ‘welcome page’ designed to set expectations for the study. Include information like the approximate study length, the number of tasks participants will need to perform, and whether they need to think out loud.
Simple demographic, psychographic, and behavioral questions can also be asked after the welcome page. These types of questions are great for getting participants warmed up before having them perform usability tasks, and can also be used for segmentation or filtering purposes later.
It’s also not uncommon to include pre- and post-site experience questions to see how participants’ experience on the site impacts their ratings afterwards. Pre-site experience questions can be included in the introductory questions and asked again at the end of the study. Examples might include brand perception and brand attributes.
Step 5: Determine the tasks
Keeping in mind the study goals and the recommended study length, outline the tasks you’d like participants to perform. Estimate how long you think the average user would take to perform each task. The tasks you decide on may be based on what you’d like to learn and the top tasks a particular type of user would typically perform on your site.
When you write the tasks, be sure to provide the context around the task so participants approach the task with the right perspective. Here are some further tips on writing effective study tasks.
Step 6: Decide on the task follow-up questions
These questions should be designed to help answer your research question and meet your study goals. Keep in mind that a variety of information can be automatically collected in remote usability tests including task success, time on task, number of clicks, video, audio think out loud, clickstreams, and heatmaps.
However, don’t rely on a single data point; survey questions after each task can provide a richer understanding of the user experience. A combination of different types of behavioral and attitudinal data can help triangulate the user experience.
Close-ended survey questions can be useful in understanding participants’ attitudes and perceptions. They can also aid analysis by quantifying responses. Rating scales can capture information about task difficulty, feedback on content and features, and perception of a product or landing page. Multiple choice questions about task problems and frustrations can help quantify issues.
Filters can be created from specific close-ended responses to help isolate and understand why a particular issue is occurring by looking at other data points, such as time, success, and task videos. Filters can also be used to understand differences and similarities between user segments.
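As a loose illustration of this kind of filtering, here is a minimal Python sketch over hypothetical task-level records. The field names and values are invented for the example, not any tool’s actual export format:

```python
# Hypothetical per-participant records for one task, as a testing tool
# might export them (fields and values are illustrative assumptions).
results = [
    {"participant": "P1", "success": True,  "time_s": 42,  "difficulty": 2, "segment": "customer"},
    {"participant": "P2", "success": False, "time_s": 180, "difficulty": 6, "segment": "non-customer"},
    {"participant": "P3", "success": False, "time_s": 150, "difficulty": 7, "segment": "non-customer"},
    {"participant": "P4", "success": True,  "time_s": 55,  "difficulty": 3, "segment": "customer"},
]

# Filter on a close-ended response: participants who rated the task
# difficult (5 or higher), then inspect their other data points.
difficult = [r for r in results if r["difficulty"] >= 5]
avg_time = sum(r["time_s"] for r in difficult) / len(difficult)
success_rate = sum(r["success"] for r in difficult) / len(difficult)

print(f"{len(difficult)} participants rated the task hard; "
      f"avg time {avg_time:.0f}s, success rate {success_rate:.0%}")
```

From here, a researcher would typically watch the task videos for just the filtered participants to understand why the task felt hard.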
Open-ended survey questions allow participants to provide free-form text responses about their task experience. Even if you collected think-out-loud data as participants were interacting with the site and performing the task, it’s useful to have some open-text questions that allow participants to reflect on their experience. Some participants may not be able to articulate their thoughts out loud in the moment and need time to reflect on their experience afterwards.
Step 7: Add Final Questions
While task follow-up questions focus on the experience participants have while performing each task, post-site experience questions are good for understanding how participants felt about their overall experience with the site.
Common final questions may include SUS (System Usability Scale), NPS (Net Promoter Score), Qx Score, SUPR-Q, Satisfaction, and Call-to-Action measures (e.g., likelihood to sign up, likelihood to make an appointment). You’ll find more information on these scores in our article on UX KPIs and metrics.
Be sure to include any questions you had planned on asking pre- and post-site interaction to determine if there’s a positive or negative change after participants performed the usability tasks.
Developing a project plan with clearly defined study goals and a recruiting plan is essential to conducting a good remote unmoderated usability study.
- Outline the framework of the study (i.e., Introductory Questions, Tasks, Task Follow-Up Questions and Final Questions) and fill in the details later.
- Determine your recruiting criteria. Remember, to identify usability issues, you do not necessarily need your exact audience. It’s often possible to broaden the criteria to get the right type of participants, saving you money with recruiting costs and time.
- Keep the participant study length within 20 minutes for desktop and 15 minutes for mobile.
With remote, unmoderated studies there won’t be an easy opportunity to probe participants later. Planning ahead, anticipating the participant experience, and formulating appropriate questions, combined with the data that is collected automatically (e.g., videos, audio, heatmaps), will provide rich insight into the user experience.
Ann is an expert in conducting remote online user experience studies, with over 15 years of experience providing research consulting services to companies ranging from startups to the Fortune 500 across a variety of verticals. Prior to joining UserZoom, Ann led hundreds of customer engagements and managed a team of researchers at Keynote Systems.