It’s great to sample from your customer base when running user research, but please don’t stop there!
Imagine you are on a product team about to embark on a new project: design updates to a fitness tracker app. (And maybe this is not unlike your everyday work life, so please feel free to substitute a different product in this example.) This fitness app is sort of popular, but there hasn’t been any solid user experience (UX) research done in a while, and it definitely could use some improvement.
Cool. Time to roll up those research sleeves! I imagine you might start by looking at existing UX metrics — whatever you’ve got in terms of analytics, customer surveys, and ratings.
And what now? You might interview some customers, continue to identify features that need improvement, and create prototypes to test new designs with some of your users.
Unfortunately, you’ve forgotten an important thing! But you aren’t alone. UX teams often conduct research with existing customers because that’s easiest — and, of course, customer input is valuable. (So, please pat yourself on the back for that!) But unless you also recruit participants who do not use the product, you miss out on potential insights into what keeps those people from becoming users in the first place.
Proper UX testing requires that you test a product with representative users. “Representative users” includes not only people who are already customers, but also those who could reasonably become users, or even those who previously were users.
It’s easy to forget that there’s an inherent bias in testing designs with customers. These folks are already using the product, so that means they probably already somewhat like it! (…in a consumer context, anyway. In an enterprise context, well, they might not have a choice. But for your sake, I hope they like it fine!)
It’s valuable to find out what users like (and don’t like) about a product, but current users are a slice of a bigger pie.
Because customers already use the product, they also come with set expectations. Users notoriously tend to dislike interface changes at first, until they get used to them; this is called change aversion.
Even if a new design is better for learnability and long-term productivity, user reactions might be muddled if they’re accustomed to “their” existing version of the product. In our fitness tracker app example, users might not initially like it if the graphs and summaries look different from what they are used to.
A new interface might also slow existing users down: at first, they may feel lost because they are used to the old design, and they may even perform worse than someone encountering the interface for the first time. That’s why user metrics over a long period of time are important — existing users’ initial reactions to a new design or feature can skew results. People need time to develop new routines with the product.
When you are testing new designs, remember that users’ negative reactions may indeed point to the design shortcomings you want to identify with testing, but they may also simply reflect that the design feels different and new.
Non-customers can tell you why they don’t use the product, why they stopped using it, or why they’ve never heard of it.
Your existing analytics won’t help you much here, at least not in any detail. Reaching out to non-users can help you learn why people choose not to use the product, or why they might not even know your product could help them.
Or, your product might not be accessible to everyone. Is your fitness app useful for folks who are in physical therapy, or folks with disabilities?
To test UX with non-users, make a list of who you’d like to hear from and build a strategy for finding them. Here are some possible groups of non-users of our fictional fitness app:

- Former users who stopped using the app
- People who track their fitness another way, or not at all
- People who have never heard of the app
- People for whom the app may not be accessible, such as folks in physical therapy or folks with disabilities
Try to recruit research participants who fall into these groups you’ve listed. To make sure a participant is not already a user, send a screener survey before the research session.
In the screener survey, ask potential participants how familiar they are with the product being tested, along with any other questions related to the non-user groups you’ve identified. That way, you can screen out people who don’t fit your criteria and avoid wasting anyone’s time.
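If your screener tool exports responses as structured data, the disqualification step can even be automated. Here is a minimal sketch of that idea in Python; the field names, answer choices, and qualification rules are my own illustrative assumptions, not part of any particular survey tool:

```python
# Hypothetical sketch: filtering screener-survey responses to find
# non-user participants. Field names and rules are illustrative only.

def qualifies_as_non_user(response):
    """Disqualify anyone who already uses the product regularly."""
    # Screener question: "How often do you use the app?"
    if response["usage_frequency"] in ("daily", "weekly"):
        return False
    # Keep people who match at least one target non-user group.
    return any(response["groups"].values())

responses = [
    {"usage_frequency": "daily",
     "groups": {"former_user": False, "never_heard": False}},
    {"usage_frequency": "never",
     "groups": {"former_user": True, "never_heard": False}},
    {"usage_frequency": "never",
     "groups": {"former_user": False, "never_heard": True}},
]

participants = [r for r in responses if qualifies_as_non_user(r)]
print(len(participants))  # the two non-users qualify
```

Even if you review responses by hand, the same logic applies: disqualify frequent users first, then keep only people who match at least one of your target groups.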
Recruiting non-users takes a little more planning than reaching out only through existing customer channels, but once you incorporate them into your research strategy, you will gain a much wider understanding of how you can improve your product’s design.