Follow our comprehensive guide to A/B testing to help optimise your site, improve user experience, and increase revenue.
How much impact can one tiny feature on a webpage really have?
A whole lot, as it turns out.
Through A/B testing, hotel booking site arenaturist.com found that a vertical form (vs. a horizontal form) had a huge impact on their users, and their conversion rates…
Their aim was to increase form submissions, and this small change certainly delivered – boosting submissions by a huge 52%.
Switching from a horizontal form to a vertical form had a significant impact on users, and massive potential to increase conversions and revenue – especially when you remember this is just the first step in the booking process.
Want results like these for your business? Follow our guide to A/B testing to help optimise your site, improve user experience, and increase revenue…
A/B testing can be used to compare the performance of a webpage, app, email, or advert by pitting a control, ‘A’, against a test variant, ‘B’.
A/B testing works by randomly splitting your inbound traffic equally between the two versions. The data captured from each group's interactions with its variant can help you to make informed, evidence-based decisions about marketing, design, and user experience.
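Under the bonnet, testing tools typically split traffic deterministically rather than flipping a coin on every page load, so a returning visitor keeps seeing the same variant. Here is a minimal sketch of that idea (the function name and experiment label are illustrative, not any particular tool's API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-form") -> str:
    """Deterministically assign a visitor to variant 'A' or 'B' (50/50 split).

    Hashing the user ID together with an experiment label means each
    visitor always sees the same variant, while traffic still splits
    roughly evenly across all visitors.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "A" if bucket < 50 else "B"
```

Because the assignment depends only on the user ID, the split stays consistent across sessions – important, since a visitor who sees both versions would muddy the data.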
As such, an A/B test can often tell you if a proposed new feature or design change is a good or bad idea before committing to it permanently. This is done by measuring the impact on a given conversion goal, for example clicking or opening a link, completing a form, buying a product, or finishing a checkout process.
Common elements for testing include headlines, text, images, colours, buttons, layouts, and features. However, ideally there should be only one element different between each variation, so if you’re testing the layout then you should not alter the text. By isolating individual elements in this way, you can get a clear representation of their influence within the user journey.
A/B testing ultimately takes the risk and guesswork away from making changes.
Developing a website or app can incur significant investment and may require some change to your overall marketing strategy. If a substantial design change results in a downturn in conversion performance, it can result in wasted investment and could have a negative impact on business performance.
Through A/B testing, this risk is negated. You can target individual features to see the effects of changing them without making any permanent commitments, allowing you to predict long-term outcomes and plan your resources accordingly.
Furthermore, A/B testing can help you understand the features and elements that best improve your user experience. Sometimes, even the most minor changes can have a huge impact on conversion rates.
For example, one test found that a red CTA button gained 21% more clicks than a green CTA button. A/B test data can inform objective decisions and changes to individual webpages, a whole site, or even a wider marketing strategy.
Carrying out A/B tests can have a number of key benefits:
Users are always looking to do something on your website. Often, there are blockers or pain points which hold users back from being able to complete their task. The result is a poor user experience. We can identify these blockers with user research, and then A/B testing is how we find a solution. The data we gather through A/B testing helps us to make decisions that improve the user experience, which we know can have a positive impact on engagement and conversions.
A/B testing allows you to test and increase conversions using your existing traffic – without needing to reach out to new audiences, which can be expensive. Therefore, even a minor change could cause a vast improvement in ROI; A/B testing allows for this with relatively little investment.
Most A/B testing platforms are relatively easy to use and cheap to install. Even the least code-savvy won’t struggle to set up and implement their tests.
A competitor may add or update a certain feature. Through A/B testing, you can determine whether a similar update will be beneficial to user journey and conversions.
Everyone has their own ideas on what will work best on your site. And some people’s opinions can be more influential than others. A/B testing these ideas can provide clear data to support – or disprove – whether ideas and implementations are worthwhile.
Optimising features to improve the user journey will automatically enable your business to focus on the customer – especially when tests are repeated and features become progressively more user-friendly.
Ultimately, A/B testing provides a relatively simple way to analyse and improve the performance of a webpage. With limited risk and cost involved, you can continually optimise content to improve the user journey, experience, and conversions.
User research can uncover issues and pain points within the user journey.
For example, people might be struggling with a menu layout or a form. Your user research could include session recordings, live testing, focus groups, or surveys. You might then ask questions or set tasks to find out why people are struggling with the menu, which features they’re struggling with the most, and so on. You could then use the insights from this research to design a new menu – and A/B test it against the old one to see if performance improves.
Carry out A/B testing if your conversion rate isn’t what you expect. Use existing data to pinpoint where you’re losing conversions and getting drop-offs. Test features at these points.
Web redesigns can damage traffic and conversion rates – think 404s, lost backlinks, crawl errors, and plummeting rankings. User experience can also take a hit: the site might look better, but users may not find it as easy to navigate. A/B tests should be carried out before, during, and after any redesign to make sure the site is as effective and usable as possible.
You could use any A/B data as a basis for some redesigns. And if your redesign doesn’t produce statistically significant results in an A/B test – rethink and optimise your strategy.
A/B testing can be crucial if you make a change to your page that will affect the user journey or purchase point. For example: a poorly-optimised change to a shopping cart or email sign-up page has the potential to lose traffic and conversions. Make sure any changes at these points will enhance the user experience.
An optimised website can improve user experience, and ultimately lead to higher conversions – and so higher revenue. Follow an ongoing A/B testing process to optimise your site continually.
Some features of a page have a more significant impact on users than others. Use A/B testing to optimise influential features within the user journey:
The page headline hooks viewers to stay on the page. It needs to generate interest and engage users. You could test headline variations, such as:
Email subject lines can also undergo similar A/B testing, with a goal for users to open the email. Subject lines with different features can be tested:
Pages should include relevant content in a layout that isn’t cluttered or confusing. You can find optimum content and combination layouts by testing:
Some user research tools include qualitative test features like heatmaps and click maps. These can support A/B test data to find any distractions or dead clicks, i.e. clicks that don’t link to another page.
Your audience will respond better to copy which is optimised specifically for them. You could use testing to find whether your audience responds better to:
CTAs are where the action on a page takes place – and may also be the conversion goal during a test. An example would be a coloured button that says “click here.” A number of factors can influence user behaviour at this point:
Navigation can be the key to improving user journey, engagement, and conversions. A/B testing can help to optimise site navigation, making sure the site is well structured with each click leading to the desired page. Possible test features include:
Forms require users to provide data to convert. While individual forms depend on their function, you can use A/B testing to optimise:
Images, videos, and audio files can be used differently depending on the customer journey, e.g. product images vs product demonstration videos. A/B testing may determine how media can optimise different aspects of the user journey:
While descriptions are likely to depend heavily on the product, some elements can be optimised, such as:
A/B testing could help determine whether adding social proof to your page is beneficial. You could test the formats and locations of features like:
A/B testing should be treated as an ongoing process of continual rounds of tests, with each test built upon the data collected previously. Within a single round of testing, though, you can use the following framework to start, carry out, and complete a test.
Your existing data can help you find ‘problem’ areas. These could be pages with low conversion or high drop-off rates. Use your own data to inform which features you should test. You could base A/B testing on your own or someone else’s gut instinct and opinion, but tests based on objective data and in-depth hypotheses are more likely to gain valid results.
Generate a valid testing hypothesis – consider how and why a certain variation may be better.
For example, you might want to improve the open rate of emails. So your hypothesis could be: “Research shows users are not responding to impersonal subject lines. If we include the user’s name in the subject line, significantly more users will open our email.”
Testing tools are embedded into your site to help run the test effectively. There are a number of tools available, each having its own pros and cons. Look into each different platform to find out which would be the best choice for your tests and goals.
Decide how you will determine the success of each variation – whether that’s clicking through to a certain page, buying a product, or completing a form. You set this goal within the testing platform, so it knows what to class as a successful conversion.
If the goal is to complete a form, for example, it’s logged once the user reaches a “thanks for your submission” message. Or if the goal was to play a video, it’s logged once the video is viewed. The testing platform will record each conversion and the test variation shown to the user. This will almost always require a snippet of tracking code to be added to one or more webpages.
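As an illustration of the bookkeeping involved, here is a toy in-memory sketch of how a platform might tally visits and conversions per variant (real tools persist these events server-side via the tracking snippet; all names here are hypothetical):

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """Toy event log: count visitors and conversions per variant."""
    visitors: Counter = field(default_factory=Counter)
    conversions: Counter = field(default_factory=Counter)

    def record_visit(self, variant: str) -> None:
        self.visitors[variant] += 1

    def record_conversion(self, variant: str) -> None:
        # Fires when the goal is reached, e.g. the "thanks for your
        # submission" page loads for this visitor.
        self.conversions[variant] += 1

    def rate(self, variant: str) -> float:
        """Conversion rate for one variant."""
        return self.conversions[variant] / self.visitors[variant]
```

The key design point is that every conversion is recorded against the variant the user was shown, so the two rates can be compared directly at the end of the test.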
You can create variations in two ways – usually within your chosen testing tool. Don’t forget to include a control or ‘champion’ page.
Run the test over a pre-set time scale. Don’t cut it short if you see results earlier than planned; the average duration is 10-14 days, but this ultimately depends on your site and its traffic.
It’s crucial to consider and stick to a set sample size. The sample size needed usually depends on the change you expect to see. If you hypothesise a stronger change, you will need fewer users to confirm it.
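A common way to size the sample is a power calculation for comparing two conversion rates. The sketch below assumes a standard two-proportion z-test at 95% significance and 80% power (the function name is illustrative):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_baseline: float, p_expected: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed in each variant to detect the expected change."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_power = z.inv_cdf(power)          # desired statistical power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = abs(p_expected - p_baseline)
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)
```

For example, detecting a lift from a 5% to a 6% conversion rate needs roughly 8,000 visitors per variant, while a lift from 5% to 10% needs only a few hundred – which is why a bolder hypothesised change can be confirmed with fewer users.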
Also bear in mind that some users may have seen the page before; they may automatically respond differently to users seeing the page for the first time.
If you’re testing emails and their content, you may have more control over the sample and what individual users see. You may need to randomly split the sample, and create and schedule the emails manually.
Analyse your results using your chosen testing tools. Regardless of whether the test had a positive or negative outcome, it’s important to calculate whether the data is statistically significant.
Statistical significance indicates how unlikely it is that an observed difference between variants is down to chance alone. It can inform whether the test results and implications are reliable, and whether they justify making a change to your site.
Generally, running your test for a longer period of time – and allowing more users to be involved – lowers the risk of results being down to chance. Larger samples are often more representative of overall audiences. And so their behaviour can more reliably represent how a whole audience would behave.
While A/B testing software will present quantitative data, you will need to analyse results effectively. And there are a few things to consider when doing so.
A/B testing can collect a lot of data on many aspects of user behaviour. Focusing on your original goal metric is important. Your analysis and results will then align with your original hypothesis and goals.
Most testing tools include an analysis feature to calculate the statistical significance of the data collected.
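As a sketch of the calculation behind that feature, a pooled two-proportion z-test gives a p-value for the difference between variants; a p-value below 0.05 is the conventional threshold for significance at the 95% level (the function name here is illustrative):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # rate assuming no real difference
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z_score = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z_score)))
```

With, say, 50 conversions from 1,000 visitors on variant A against 100 from 1,000 on variant B, the p-value comes out well below 0.05 – strong evidence the difference isn’t chance.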
If one variation proves to be a statistically significant positive change, then implement it. If the results are inconclusive or negative, use this knowledge as a basis for your next tests.
As we said earlier, A/B testing provides quantitative (numerical) data, but may not reveal the actual reasons why visitors to each page behave the way they do.
UserZoom customers often run a usability study to understand the quantitative data they receive from A/B testing. For example, they may run a think-out-loud study to probe their two A/B designs more deeply and find out why one is performing better than the other.
A/B testing can carry some challenges. But they can usually be overcome by following an objective and thorough procedure:
Use existing data to determine when – and why – to test a feature. For example, check for pages or links with fewer conversions. Test one element at a time so you can easily pinpoint the influence on users.
Data should be used to see where issues lie, and to formulate objective theories and solutions.
The sample size should be determined before the test runs. People commonly cut testing short because they want quick results. But A/B tests need a sufficient sample size for representative results.
A/B tests should be repeated to ensure pages are continuously improved – this will help optimisation efforts to be effective long-term. Use results and insights from each test to form the basis of the next. Learn from successes and failures. They can indicate how your users behave, and how other features can be optimised.
Try to ignore your own opinion and the results received by others. Focus on the statistical significance of results to make sure data – and any subsequent changes made – are justified.
When planning an A/B test, be sure to consider any external factors, e.g. public holidays or sales, that may influence web traffic. You should also research which testing tool is best suited to your needs. Some may slow your site down, or lack the necessary qualitative tools, e.g. heatmaps and session recordings.
When driven by data and executed objectively, A/B testing can generate great ROI, drive conversions, and improve user experience. Subjective opinion and guesswork are taken out of the optimisation process, so A/B testing can ultimately inform strategic decisions across your marketing efforts, driven purely by data.
If decisions and tests are carried out at random, based on opinion, they’ll probably fail. Start the testing process off the back of clear data, a strong webpage, and a controlled process. It’s the best way to start a cycle of effective A/B tests – and a great first step to a well-optimised site and user journey.