We live in a Key Performance Indicator (KPI) world.

Most executives are bombarded with KPIs and dashboards on a daily basis, so good leaders have learned to focus on a few KPIs that really matter. It’s all about figuring out which needles need to be moved and what moves those needles.

Below is a list of KPIs that are used across the User Experience world. This is not an exhaustive list, but these KPIs are common across many industries. A more extensive list of KPIs is documented on the CX Partners website.

Typical KPIs

Actual Behavior from Live Site / App (Analytics): Analytics platforms provide a range of KPIs, but all of them require a live website or app, and they are extremely hard to capture for a competitor's site. These include Page Views, Conversion Rate, Bounce Rate, and many more.

Sales & Marketing: Number of Leads, Social Media Impressions, Brand Trackers, Third party, Email Campaigns, Search Metrics, Satisfaction, NPS.

Financial Metrics: Revenue, Margins, Average Annual Contract, Average Sales Price, Lifetime Value, Customer Acquisition Cost, Churn Rate, Active Users, etc.

Customer Support Metrics: Number of Calls, First-Call Resolution, Average Call Time, Inbound vs. Outbound.

Top Usability and UX KPIs

We work with Fortune 500 companies across all industries and have noticed certain KPIs that are most commonly used for benchmarking (either over a period of time or compared against competitors). We broadly divide them into two categories:

Behavior (what they do)

In the User Research world, it's critical to understand what people are doing and how they are using your products. Task-based usability testing is the industry-standard method for gathering this information. This means not just 'in-lab' think-out-loud studies, but also remote unmoderated studies, which can reach large sample sizes efficiently.

Attitude (what they say)

It is critical to understand how users feel, what they say before, during, and after using a product, and how this affects brand perception.

Behavioral (what they do) UX KPIs

Task Success

Usually represented as a percentage. Typically, a group of representative users is given a set of realistic tasks with a clear definition of task success. Examples of task success include reaching a specific page in a checkout or registration flow, finding the right answer on a marketing website, or reaching a particular step in a mobile app. Having a clear definition of success and/or failure is critical.

If 8 out of 10 users completed the task successfully and 2 failed, then 'Task Success' would be 80%. Because of the small sample size of 10, the 'Margin of Error' at a 95% confidence level would be about ±25 percentage points. This means that we are 95% confident that the 'Task Success' rate falls somewhere between 55% and 100%.

But if 80 out of 100 users completed the given task successfully, then the 'Task Success' rate would still be 80%, but with a 'Margin of Error' of only about ±8 points. This means that we are 95% confident that the 'Task Success' rate falls somewhere between 72% and 88%. The larger the sample size, the smaller the 'Margin of Error'.
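
As a rough sketch of where these ranges come from, here is one common way to compute them: the normal-approximation (Wald) interval for a proportion. The helper below is illustrative only, and for very small samples an adjusted interval (such as the adjusted Wald) is generally recommended.

```python
import math

def task_success_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Wald (normal-approximation) confidence interval for a task success rate.

    z = 1.96 corresponds to a 95% confidence level.
    Returns (rate, lower bound, upper bound), clamped to [0, 1].
    """
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# 8 of 10 users succeed: roughly 80% +/- 25 points, i.e. 55% to 100%
print(task_success_ci(8, 10))
# 80 of 100 users succeed: roughly 80% +/- 8 points, i.e. 72% to 88%
print(task_success_ci(80, 100))
```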

Task Time

Usually an absolute number, for example: 3 minutes. For most task-based studies, where the user's goal is to get something done as efficiently as possible, shorter task times are better. There are exceptions, though: if the goal is to keep the user engaged, such as staying on Facebook's News Feed, then longer task times could be better. It really depends on what the task is. Even on Facebook's News Feed, if the goal is to find a specific event, then shorter task times might be the better outcome.

Organizations can look at either the Average Task Times for only those who were successful, or they can look at the Average Task Times for all users.
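
As a quick illustration of the difference between those two averages, here is a minimal sketch with made-up timing data:

```python
# Hypothetical per-participant results: (task time in seconds, task success)
results = [(95, True), (120, True), (210, False), (88, True), (300, False)]

all_times = [t for t, _ in results]
success_times = [t for t, ok in results if ok]

avg_all = sum(all_times) / len(all_times)              # all participants
avg_success = sum(success_times) / len(success_times)  # successful participants only

print(f"Average task time (all users): {avg_all:.0f}s")        # 163s
print(f"Average task time (successful only): {avg_success:.0f}s")  # 101s
```

Failed attempts often run long (or are cut off), so the two averages can diverge substantially; it is worth being explicit about which one a benchmark reports.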

Page Views, Clicks (or Taps)

Website page views and clicks are a common KPI. For mobile apps, web applications, or single-page web apps, some combination of clicks, taps, number of screens (refreshes), or steps can be measured.

If you are running an in-lab study, counting these by hand can be extremely tedious. But if you are using tools like UserZoom, most of these metrics are captured automatically, which significantly reduces analysis and reporting time. In most cases, combining these metrics, or at least connecting them to Analytics data (from the live site or apps), is beneficial.

Problems & Frustrations

These can be measured as the number of unique problems identified and/or the number (or %) of participants who encounter a certain problem. We recommend conducting Think-Out-Loud studies to identify problems, and then quantifying them via a large-sample study to find the % of a larger population that actually encounters each problem (with confidence intervals).
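
To make the quantification step concrete, here is a minimal sketch with hypothetical observation data (the participant IDs and problem labels are invented) that tallies how many participants hit each unique problem:

```python
from collections import Counter

# Hypothetical large-sample data: which problems each participant encountered
observations = {
    "p01": {"coupon field hidden", "login loop"},
    "p02": {"login loop"},
    "p03": set(),
    "p04": {"coupon field hidden"},
    "p05": {"coupon field hidden", "login loop"},
}

n = len(observations)
counts = Counter(problem for hits in observations.values() for problem in hits)

print(f"{len(counts)} unique problems across {n} participants")
for problem, k in counts.most_common():
    print(f"  {problem}: {k}/{n} participants ({100 * k / n:.0f}%)")
```

The same margin-of-error calculation shown under Task Success can then be applied to each percentage.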

Most of these Behavioral KPIs are collected ‘per task’ and then aggregated as an average for a given study, and/or digital product. These are then compared over a period of time (e.g. each quarter) or compared with competitors’ digital products.

Attitudinal (what they say) UX KPIs

There are several attitudinal UX KPIs: Net Promoter Score (NPS), System Usability Scale (SUS), SUPR-Q (pronounced SuperQ), Customer Satisfaction (CSAT), and more.

SUPR-Q provides key KPIs for websites. SUS is a common way to get system level attitudinal KPIs for Web Applications or Mobile Apps.
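
For reference, SUS consists of ten 5-point items and yields a 0-100 score. Below is a minimal sketch of the standard scoring formula (Brooke, 1996):

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring.

    `responses` holds the ten item ratings in order, each on a 1-5 scale.
    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating).
    The sum is multiplied by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r for i, r in enumerate(responses))
    return total * 2.5

# Example: a fairly positive set of responses
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```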

Below are the four key SUPR-Q categories:

Usability

Two questions on a 5-point scale:

  • This website is easy to use.
  • It is easy to navigate within the website.

Credibility (Trust, Value & Comfort)

Two questions on a 5-point scale:

  • The information on the website is credible.
  • The information on the website is trustworthy.

There are also alternate options for ecommerce websites:

  • I feel comfortable purchasing from this website.
  • I feel confident conducting business with this website.

Loyalty

The Net Promoter Score question “likely to recommend” is on an 11-point scale, while “future intent to visit” is on a 5-point scale:

  • How likely are you to recommend this website to a friend or colleague?
  • I will likely visit this website in the future.

Appearance

Two questions on a 5-point scale:

  • I found the website to be attractive.
  • The website has a clean and simple presentation.

The scoring is straightforward and easy to understand. Average the responses to the first seven questions and add half of the score for the Likelihood to Recommend (NPS) question. Optionally, your score can be compared to industry benchmarks from over 150 websites and thousands of users. You can read details about SUPR-Q on www.suprq.com.
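
Here is a minimal sketch of that scoring rule exactly as described above (the response values in the example are invented; see suprq.com for the official scoring and benchmarks):

```python
def suprq_raw_score(five_point_items: list[int], likelihood_to_recommend: int) -> float:
    """Raw SUPR-Q score as described above: the average of the seven
    5-point items plus half of the 0-10 likelihood-to-recommend rating."""
    assert len(five_point_items) == 7 and all(1 <= r <= 5 for r in five_point_items)
    assert 0 <= likelihood_to_recommend <= 10
    return sum(five_point_items) / 7 + likelihood_to_recommend / 2

# Example: mostly-agree responses on the 5-point items and an LTR of 9
print(suprq_raw_score([4, 4, 5, 4, 4, 5, 4], 9))  # ~8.79
```

The resulting score can then be compared against the published industry benchmarks mentioned above.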

Most Attitudinal KPIs (like SUPR-Q, SUS) are at the product, software or system level and asked towards the end of a task-based usability study. As mentioned before, Behavioral KPIs are collected ‘per task’ and then aggregated.

Conclusion

Quantifying UX, benchmarking KPIs, and/or comparing them with competitors is an important step in understanding which business needles need to be moved, and how to move them. This is why most executives are now looking for ways to benchmark their digital products' UX. The good news is that it is now easier than ever to get these benchmarks.

Watch our webinar with Jeff Sauro to learn more about Benchmarking Best Practices.