Our resident benchmarking expert and Senior Director of UX Research, Dana Bishop, answers some of the most pressing UX benchmarking questions.

In the second part of our benchmarking webinar mini-series, Conducting Quick, Cost-Effective UX Benchmarking at Scale, Dana discusses how to put everything you learned during last week’s webinar into practice and nail your first (or second) benchmark! During this session, Dana also answered some of the top questions we’re often asked when rolling out UX benchmarking for our customers.

You’ll find these replies below, as well as a peek at our qxScore, a single-metric scoring solution that gives a blended quantitative and qualitative overview of your user experience. The full-length webinar explains this in much greater detail and is available to watch now!

Why benchmark a user experience?

Benchmarking UX is critical because you’re creating a baseline of usability metrics that you can compare over time to understand how you’re progressing, identify problem areas for improvement, and build a vision for future releases.

Different journeys or elements may be developed by different teams, which makes it difficult to ensure the overall journey is looked after. Benchmarking can also help you measure consistently and holistically across multiple teams, products, and business units.

Once you create a baseline, you can begin retesting to understand the success of changes, with the support of real metrics, dashboards, and results. You’ll then be continually optimizing your key customer journeys using a consistent approach to testing.

What is competitive benchmarking?

Competitive benchmarking helps you keep a close eye on business-critical key performance indicators (KPIs) across your industry. You’ll understand the success of your competitor or competitors, and understand users’ perceptions of your product within the context of those competitors.

Competitive benchmarking studies can also be very compelling to stakeholders outside of UX, helping to get broader buy-in by showcasing the value of your own UX research against experiences provided by your competitors.

Who should be benchmarking?

The most obvious answer is UX researchers. But really it’s any team trying to establish UX strengths and weaknesses, figure out how to improve a product, or develop a holistic view of the customer journey. Product marketing teams trying to establish competitive insights and differentiators should be benchmarking too.

What can you benchmark?

Anywhere your users and customers are accessing your site or product online.

This applies to both external customer-facing sites and internal sites or intranets, because you really want to ensure that your online properties are streamlined and optimized not only for your customers but also for internal-facing users and employees. We’ve run a considerable number of benchmarks on intranets and internal sites; sometimes they’re the forgotten digital properties.

When should I start benchmarking and at what cadence?

I get this question a lot. I’ll start by saying you can begin anytime. You can benchmark anything from early concepts to a live working product, and every iteration that comes before or after. There are a number of typical triggers to consider, but the best advice I can give is don’t wait. Start now. Start measuring where you are today.

As for how often, I work with many different customers who have different intervals in which they run their benchmarking. Sometimes it’s annual, sometimes biannual, sometimes even quarterly or monthly. It really depends on your development cycle and how often you’re rolling out design changes.

I also strongly recommend not only testing at regular intervals but also testing before and after the launch of a major update or redesign. It may not fall within that regular interval schedule, but you don’t want to find out something isn’t working six months after you launched it because that’s when your timetable says you’re due for another round of tests.

Why conduct UX benchmarking online?

It’s fast, convenient, and cost-effective. You can conduct large-scale tests using a remote unmoderated testing methodology without the need for expensive lab equipment. Another added benefit is wide geographic reach: you can go beyond the borders of your own location.

If you’re working with UserZoom in particular, we’re flexible in the way that we work with you. We offer a range of benchmarking services that can support your efforts whether you prefer a DIY approach, or full-service engagement.

One of the ways we support you is with study templates, which you can use when creating your first benchmark study. You can then customize a template if you choose and use it for all future benchmark studies, allowing you to quickly and easily run test iterations.

Another big benefit is quick results. As soon as your study is launched, you can begin to collect results in real time, allowing you to see feedback as participants complete your study from the comfort and convenience of your own desk.

How to select KPIs?

I always recommend choosing usability metrics that are important for your organization; ones that can be tied back to regular reported business metrics.

UserZoom offers a single-score solution called the qxScore (which stands for ‘quality of experience score’), a composite of the behavioral metrics and attitudinal data we collect when running benchmarking.

We heard many of our customers saying, “We have no internal measures for UX so we’re not able to provide any value with benchmarking.” The qxScore grew out of this need for our customers to implement consistent internal measures that they could track over time.

Other customers say, “Oh, we already track metrics. We track NPS and abandonment of the purchase process.” Those are obviously very valuable metrics to track. But these measurements can sometimes leave you with only half the story: they tell you what’s happening but not why.

So the qxScore is a combination of behavioral and attitudinal data that we’ve rolled up into a single score, which also allows you to drill down into what is happening and why.

Attitudinal metrics are a composite of ease of use, trust, appearance and loyalty – all fairly industry-standard.

On the behavioral side of the qxScore, we focus on task success – were they able to successfully complete the task on your site or in your app? That allows us to gauge how effectively your users and customers can complete core tasks. When you’re setting up a benchmark, it’s important to determine what success or value looks like.

If you ask them to go through a process up to a certain point, are they able to reach it? That might be one way to validate. If the task includes finding a particular piece of information or an answer to a question – then that’s a way to measure success.
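To make the idea of a blended score concrete, here is a minimal sketch of how a composite like this could be computed. Note that UserZoom’s actual qxScore formula is not described here, so the equal weighting, the 1–7 rating scale, and every function name below are illustrative assumptions, not the real implementation.

```python
# Illustrative only: the real qxScore formula is not given in this article.
# Assumptions: a behavioral component (task success rate) and an attitudinal
# component (mean of four 1-7 ratings for ease of use, trust, appearance,
# and loyalty, rescaled to 0-100), blended with an equal 50/50 weighting.

def success_rate(outcomes):
    """Share of participants who completed the task (1 = success), as 0-100."""
    return 100 * sum(outcomes) / len(outcomes)

def attitudinal_score(ratings):
    """Mean of 1-7 ratings, rescaled so 1 -> 0 and 7 -> 100."""
    mean = sum(ratings) / len(ratings)
    return 100 * (mean - 1) / 6

def composite_score(outcomes, ratings, w_behavioral=0.5):
    """Blend both components into a single 0-100 score."""
    return (w_behavioral * success_rate(outcomes)
            + (1 - w_behavioral) * attitudinal_score(ratings))

# Example: 8 of 10 participants completed the task, and the average
# ratings for ease, trust, appearance, and loyalty were 6, 5, 6, 5.
score = composite_score([1] * 8 + [0] * 2, [6, 5, 6, 5])
print(round(score, 1))  # 77.5
```

The point of a roll-up like this is that the single number is easy to track and report over time, while the underlying components remain available so you can drill into whether a drop came from behavior (task success) or attitude (how the experience felt).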

For more in-depth guidance, watch the full on-demand webinar on proving the value and quantifying UX benchmarking.