The recent proliferation of voice user interfaces (voice UI) in human computer interaction has thrown a wrench into the machine of digital experience.
With the boom of smart appliances, there's now seemingly endless pressure to make your entire home 'smart' by connecting every common household appliance to the internet. Fridges, TVs, doorbells, even vanity mirrors are now part of the Internet of Things (IoT).
Honestly, it's becoming harder every day to find 'dumb' appliances, and one day an appliance without internet access may be a rarity. And in case you were wondering whether people have noticed this shift, companies like Progressive Insurance are already making humorous commercials about it.
Ten years ago, who would have believed the IoT would become so ubiquitous? And yet here we are, talking to appliances like we're in The Jetsons. Or rather, that's the goal: we still have a long way to go in optimizing voice user interface experiences to get to that level.
All smart appliances can be controlled via a mobile device, but some companies want to extend this control to your voice as well. Amazon Echo, Google Home, Apple's Siri, and the other smart assistants currently on the market are all designed to respond to voice commands and to integrate with a variety of third-party smart devices.
This disruption is due to the unique challenges of designing technology to be controlled by voice rather than by more traditional interfacing methods. Variations in how people communicate across cultures, physical differences such as voice volume, and many other variables all shape how voice user interfaces should be designed.
However, the process of understanding and solving the challenges of creating a voice user interface is no different from doing so for traditional interfaces. The answer is to conduct user experience research!
That’s why we have created these tips on how to conduct UX research utilizing voice user interfaces with UserZoom.
UserZoom already supports mixed-methods studies with virtual assistants. Here are some example studies you can create today to start collecting data from people interacting with smart assistants:
If you want to compare the experience of using smart assistants from different manufacturers, you will be best served by creating a think-out-loud navigation task in an Advanced UX Research project and making sure your participants have their Echo, HomePod, or Google Home device close at hand.
If you are more interested in understanding how people use their smart assistants in a natural environment, you will be best served by creating a mobile study using video questions.
When writing tasks for smart assistants, create a message similar to this one:
“For this next task you will be asked to interact with Alexa using the Echo Dot. Here are a few tips for completing this task:
The reasoning behind this example is that you want participants to be as prepared as possible to give helpful feedback. Other things you may want to include in a message like this are:
When writing task instructions and scenarios, phrase them so that participants who read your instructions out loud do not accidentally trigger their smart assistant before they intend to.
For example, in the copy below, reading the scenario or the task aloud will not trigger the participant's Echo:
Your Scenario: You have just started getting ready for your day. You want to know what the weather will be like so you can plan your outfit accordingly.
Your Task: Use your Echo to find out what the weather will be like today.
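If you write a lot of task copy, it can help to screen it automatically for accidental wake words before launching a study. The sketch below is our own illustration, not a UserZoom feature: the `find_wake_words` helper and the list of wake phrases are assumptions (check the actual wake words configured on your test devices).

```python
# Illustrative helper: scan task or scenario copy for wake phrases that
# could trigger a participant's smart assistant if read aloud.
# The wake-word list is an example; adjust it for your devices.
WAKE_WORDS = ["alexa", "hey google", "ok google", "hey siri"]

def find_wake_words(copy: str) -> list[str]:
    """Return any wake phrases found in the given study copy."""
    lowered = copy.lower()
    return [phrase for phrase in WAKE_WORDS if phrase in lowered]

# The task copy above mentions the device ("your Echo") without using
# a wake phrase, so reading it aloud should be safe:
task = "Use your Echo to find out what the weather will be like today."
print(find_wake_words(task))   # no wake phrases found

# A risky phrasing would be flagged:
print(find_wake_words("Ask Alexa what the weather will be like today."))
```

A check like this is no substitute for a dry run with the real device, but it catches the obvious cases early.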
When creating tasks, make sure to take into account that people will vary in how they ask their smart assistant for things, both in syntax and choice of words.
Take advantage of follow-up questionnaires to probe further into the experience participants had using their smart assistant.
In a sample study using smart assistants, we set open-text questions to appear whenever a participant reported an ease or satisfaction rating of 4 or less, in order to better understand what exactly contributed to the low rating.
This gives participants a chance to answer a directed question about their thoughts on what could be improved about interacting with a specific smart assistant.
#6 Incorporate tasks for enabling and disabling skills in your study
If you are testing a 'skill' (a voice app created by a third-party developer to integrate with the assistant) on Alexa or Google Home, make sure you do not instruct participants to enable a skill they already use on their device, or to use a skill they have not yet enabled.
Failure to provide logical directions for the use of skills on these devices will cause confusion and lead to a high rate of failure.
If testing in a laboratory setting, make sure to revert any settings changed on your test device during a session before the next session begins.
If you are asking participants to use their own device for remote testing, include a task with specific instructions on how to disable any skills and revert any settings changed for the purposes of your study.
All a participant in your study needs is a computer to run the UserZoom study, plus the smart assistant of your choice. No extra materials are necessary to run a desktop moderated study using virtual assistants!
What the setup of this study might look like for a participant. Google Home Mini pictured.
Have questions about testing voice user interfaces? Want to know how to find participants or wondering if you set up your study correctly?
UserZoom customers can reach out to us (via your Research Partner) if you need answers to any questions like these!
Here's a sample video of this kind of study from a participant's perspective. We chose a black screen as the stimulus here, but you can use any hosted image or website.
As voice UIs become more and more ubiquitous (please, no smart toilets; we have to draw the line somewhere), UXers, product managers, and anyone else involved in tailoring the experience can and should conduct these kinds of studies. With that in mind, we hope you find this article helpful!