UX Insights are supercharging A/B Testing programs!
It used to be that people running A/B Testing programs would boast about having hundreds of live experiments running at any one time. People now understand that the measure that really matters is the effectiveness of those experiments, not how many of them there are.
In this article we will examine how UX Insights are supercharging A/B Testing programs, drawing on real-world examples from UserZoom’s customers along with five best-practice how-tos:
A quick note: when we mention A/B Testing for the rest of this article, we mean both split and Multivariate Testing (MVT).
It’s prudent to use multiple sources of insight to prioritize what to A/B Test next. But, in reality, too many teams revert to hunches to decide.
On the face of it, prioritizing A/B Tests should be straightforward. A quick Google search reveals how CRO practitioners recommend approaching it; the advice usually boils down to using data to identify the highest-value pages that are easiest to change. But, in reality, many teams rely on hunches, especially those of influential executives, to prioritize their test plan.
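To make that data-driven approach concrete, here is a minimal sketch (in TypeScript) of one way candidate pages could be scored and ranked. The scoring formula, page names and figures are purely illustrative assumptions, not a method prescribed by UserZoom or any of the retailers mentioned below.

```typescript
// Illustrative only: rank candidate pages by a composite of the value they
// carry, how easy they are to change, and how severe the struggle observed
// in UX testing was. All names and figures below are hypothetical.

interface CandidatePage {
  name: string;
  monthlyRevenue: number; // value flowing through the page (e.g. attributed revenue)
  easeOfChange: number;   // 1 (hard to change) to 5 (trivial to change)
  uxSeverity: number;     // 1 (minor struggle) to 5 (severe struggle seen in UX testing)
}

// Higher revenue, easier changes and more severe observed struggle all push
// a page up the test plan.
function priorityScore(page: CandidatePage): number {
  return page.monthlyRevenue * page.easeOfChange * page.uxSeverity;
}

const candidates: CandidatePage[] = [
  { name: "Product category page", monthlyRevenue: 120_000, easeOfChange: 4, uxSeverity: 4 },
  { name: "Checkout delivery step", monthlyRevenue: 300_000, easeOfChange: 2, uxSeverity: 5 },
  { name: "Homepage hero", monthlyRevenue: 80_000, easeOfChange: 5, uxSeverity: 1 },
];

const ranked = [...candidates].sort((a, b) => priorityScore(b) - priorityScore(a));
ranked.forEach((page, i) =>
  console.log(`${i + 1}. ${page.name} (score: ${priorityScore(page)})`)
);
```

The UX-severity input is the piece that hunch-driven prioritization misses: without observing real customers, that column tends to get filled in by guesswork.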
This can lead to a plateauing of results after some initial wins (where the first “no-brainer” hunches proved right) because perceived rather than actual customer pain points are being tackled.
A straightforward way to avoid this is to identify actual customer struggle by observing target customers on the key journeys through UX testing. Insight from this testing helps teams in three ways:
Real-world Example
A high-end women’s clothing retailer had been using an A/B Testing solution for more than 12 months. Their first few A/B Tests, designed to address the hunches of the ecommerce team, resulted in decent uplifts (peaking at 5%). The success of these initial tests led them to continue relying on their own hunches to prioritize future A/B Tests. Over the following months the results were less impressive, even though the volume of testing increased.
Then, following two rounds of cross-device UX testing on their key journeys, they re-prioritized their test plan to address the points of actual customer struggle that the testing revealed such as:
Left to their own hunches, the team would never have prioritized running A/B Tests in these areas, and their experiment results would have continued to plateau.
Insight from UX testing helps optimization teams develop robust root-cause hypotheses, so that the test variants they design address an underlying issue that they understand.
If teams do not understand the root cause of a conversion problem, they are often tempted to rely on guesswork or best practice to design variants for A/B Tests. This can limit the overall success of an A/B Testing program, and can even lead to false positives, where an uplift is stumbled upon without the (more lucrative) underlying issue being addressed.
Real-world Example
One online retailer with an increasing bounce rate on product category landing pages surmised that a lack of product filtering options was causing users to abandon. The team developed design variants with more granular filtering based on this hypothesis and ran A/B Tests, which led to some improvement but did not address the root cause of the increasing bounce rates.
By observing customers landing on product category pages in a round of UX testing, the team quickly identified the root cause of the abandonment: it was the sorting options, rather than the filtering, that users were struggling with. The team then successfully reduced bounce rates in another round of A/B Testing that addressed the actual root cause.
Even with robust root-cause hypotheses, the success of any A/B Test depends on the quality of the design variants: how well do they address the root-cause problem?
An easy way to improve the quality of variants is for teams to gather UX Insight on mock-ups or prototypes and improve them during the design phase, before they are A/B Tested. There’s no need to wait for finished designs, and testing can be undertaken rapidly, with the design team iterating on the results.
Being confident that the design variants are of the best achievable quality maximizes the likelihood of A/B Testing success.
Real-world Example
An online electrical goods retailer identified that the absence of videos on their product pages was limiting the conversion opportunity. As they designed a “B” product page that included manufacturer product videos, they ran UX tests to validate the variant quality with customers before running live A/B Tests.
The testing revealed that the manufacturers’ videos (highly stylized TV adverts) were of little value to users, who wanted to see products in context. The retailer then developed a variant with videos that showed the product in a real environment (e.g. a kitchen) and used this style of video in the A/B Tests.
This resulted in a dramatic 8% uplift in online sales – an improvement that would not have been achieved had UX testing not revealed why the team’s initial design was sub-optimal.
Some complex conversion opportunities require a greater depth of insight before they can even be considered for A/B Testing.
Redesigning a menu structure is the most common example of where more extensive UX testing is prudent. Menus can be complex and extensive; running Card Sorting and Tree Tests ahead of any live testing can save teams from designing and running what can prove to be very complex A/B Tests.
There are times when running A/B Tests (or, to be more precise, many A/B Tests) in the wild is simply not feasible. This often applies when compliance or consistency of experience is important, for example in the logged-in account area of a bank or utility company.
Real-world Example
British Gas wanted to improve their online bills for customers. They developed four variants into visual prototypes. But, knowing that call centre staff would struggle to answer customer queries if they first had to deduce which variant was being served (had all four been tested in the wild), they ran an extensive round of UX testing to determine a single variant to A/B Test. After gathering insight from over 1,000 customers, British Gas achieved a significant improvement with the winning variant from the UX testing.
Implementing an onsite qualitative survey can help identify why a variant has won, so the team can learn from it.
Even with the most extensive UX testing program in place, when a variant outperforms the others it’s useful for CRO and product teams to understand why. Understanding why can lead to a deeper understanding of customers, as well as prompt ideas for future experiments.
Real-world Example
A UserZoom Financial Services customer implemented an intercept survey that was triggered for a small percentage of visitors in an A/B Testing experiment on their car insurance “brochure” pages. By understanding why users preferred the winning variant, the team could repeat its success on other insurance types.
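For teams wondering what such an intercept looks like in practice, here is a minimal client-side sketch (in TypeScript) of the triggering logic, assuming the variant assignment can be read on the page and a survey widget can be launched programmatically. The function names, cookie format, experiment id and sample rate are all hypothetical, not UserZoom’s or the customer’s actual implementation.

```typescript
// Illustrative only: show a short "why did you choose this?" intercept survey
// to a small, random percentage of visitors who were served a specific A/B
// Test variant. getAssignedVariant and launchSurvey are hypothetical
// stand-ins for whatever your testing and survey tools actually provide.

const TARGET_EXPERIMENT = "car-insurance-brochure"; // hypothetical experiment id
const TARGET_VARIANT = "B";
const SURVEY_SAMPLE_RATE = 0.05; // survey roughly 5% of eligible visitors

// Hypothetical: read the variant assignment the A/B testing tool exposes,
// here assumed to be written into a cookie named after the experiment.
function getAssignedVariant(experimentId: string): string | null {
  const match = document.cookie.match(new RegExp(`${experimentId}=([^;]+)`));
  return match ? match[1] : null;
}

// Hypothetical: hand off to whatever survey widget is in use.
function launchSurvey(surveyId: string): void {
  console.log(`Launching intercept survey: ${surveyId}`);
}

function maybeTriggerIntercept(): void {
  // Only survey visitors who actually experienced the variant of interest.
  if (getAssignedVariant(TARGET_EXPERIMENT) !== TARGET_VARIANT) return;

  // Avoid re-prompting the same visitor within a session.
  if (sessionStorage.getItem("interceptShown") === "true") return;

  // Sample a small fraction so the survey itself does not disturb the experiment.
  if (Math.random() < SURVEY_SAMPLE_RATE) {
    sessionStorage.setItem("interceptShown", "true");
    launchSurvey("why-did-this-variant-win");
  }
}

maybeTriggerIntercept();
```

Keeping the sample rate low and suppressing repeat prompts are the design choices that matter here: the survey should capture the “why” without changing the behaviour the experiment is measuring.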
These best-practice recommendations demonstrate how embedding UX testing can make A/B Testing programs more successful and efficient, because team decision-making is improved through insight: eradicating hunches and involving customers at every stage.