For businesses looking to provide a better user experience and improve conversion rates, there is a range of methods to choose from, including voice of customer surveys, analysis of customer journeys, basket abandonment emails, and testing.
I’m not going to say that one method is better than the others, as the truth is that the smartest marketers will be using a blend of these techniques to find the best results for their own website.
However, this article will look at testing, which is the basis of much work around conversion rate optimisation. Testing is vital as it allows businesses to see their ideas and theories put into practice, providing clear, quantifiable evidence of how changes affect the behaviour of users on a site.
Testing should remove guesswork from the equation. While many people may have ideas about how a website should look and which features are important, testing provides proof of what does and doesn’t work.
I asked several CRO practitioners and retailers for their views and how they generate ideas for testing.
You need to find a starting point for testing, and this can be where other CRO methods come into play. For example, analytics may help you to identify a page with higher than normal drop-out rates, or customers may provide feedback that points at an issue with checkout forms.
Armed with this information, you can observe users on the site to gain more insight, or carry out A/B tests to try out solutions.
Stuart McMillan, Deputy Head of Ecommerce at Schuh, draws on three main sources for test ideas.
Ideas can come from various sources within the company too. Sean Collins is Head of CRM at Mr & Mrs Smith, and regularly asks everyone in the company for ideas:
“The key is to then make an open prioritization session out of all the ideas so you pick the best ideas, not just the high-profile ones. And say thank you and credit the person who identified it.”
Paul Rouke, Founder and CEO at PRWD, also recommends using internal sources, such as ensuring that customer service teams are capturing and grouping feedback.
Another route is to analyse completed tests and session recordings in detail to identify other areas to improve, as well as new variations of completed tests.
As Orangeclaw’s Chris Lake points out, there are lots of ideas out there already: “Ideas can come from site data, user research, customer feedback, team suggestions, competitor benchmarking, research, blog posts, events, and so on. I have a database of around 1,000 ideas for testing, which I cross-reference when analysing sites.”
UX and CRO are closely related, in that both are often working towards the same goal of improving site performance. There may be occasional conflicts where end goals differ, but providing the best possible user experience usually serves the interests of both users and retailers.
As with improving usability, there’s always unfinished business with CRO.
You may have a great site, generating lots of sales, but that should be viewed as a temporary state of affairs – there are always ways to improve, and a need to keep up with the competition.
Testing should be part of this continuous optimization process, whether it’s user or A/B testing, or preferably both.
So, there’s a need for continuity, but a question remains: how many tests do you need to generate insights you can rely on? Our contributors add the important caveat that quality should be prioritized.
Stuart McMillan emphasises the need for statistical significance:
“This is a very rough rule of thumb but, look at the number of conversion events on your website per month, divide by 2,000 and that’ll roughly be the number of A/B tests you can run in a month. Why 2,000? Well, assuming a 50/50 split, that should be enough to either get statistical significance or to be fairly sure that running it for longer won’t improve the statistical significance.”
There’s another important, and slightly separate point here – testing isn’t just about producing wins. As Stuart says, a failed test is one where you can’t trust the data, not one where a favoured variant didn’t win, or the test was inconclusive.
“If your variant wins; great you’ve got some new functionality that will make you money. But what if it lost and the control won? Well, the test was still interesting: why did this new fancy design which is supposed to be better for customers not actually make it better? What if it was a draw and they both had the same effect? That is also interesting; why are two quite different designs functionally equivalent to customers?”
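The win / loss / draw classification Stuart describes comes down to a significance test on the two conversion rates. As a minimal sketch (using a standard two-proportion z-test – the function name, thresholds, and sample figures are my own, not Schuh’s methodology):

```python
import math

def ab_test_outcome(conv_a: int, n_a: int,
                    conv_b: int, n_b: int,
                    alpha: float = 0.05) -> str:
    """Classify an A/B test as a win for the variant, a win for
    the control, or a 'draw' (inconclusive), using a two-sided
    two-proportion z-test at significance level alpha."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled conversion rate under the null hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p_value >= alpha:
        return "inconclusive"
    return "variant wins" if p_b > p_a else "control wins"

# control: 100/1000 (10%), variant: 130/1000 (13%)
print(ab_test_outcome(100, 1000, 130, 1000))  # -> variant wins

# control: 100/1000 (10.0%), variant: 102/1000 (10.2%) – a draw
print(ab_test_outcome(100, 1000, 102, 1000))  # -> inconclusive
```

As the quote above notes, all three outcomes carry information: the “inconclusive” case tells you two quite different designs are functionally equivalent to customers, which is itself worth investigating.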
Then there’s the vital issue of putting quality first, a point well made by Paul Rouke:
“Before even thinking about how often you should carry out tests, put quality first – quality of the research, quality of the data analysis, quality of the hypothesis, quality of the UX design, quality of the copywriting. Once you establish quality as the foundation of your A/B testing efforts, then quantity of testing becomes a consideration. It’s the difference between sanity and vanity metrics in conversion optimization.”
Businesses that take CRO and UX seriously enough will allow their strategy to be driven by customer insight.
Quantity of testing should be secondary to quality: the quality of the data used for analysis, the strength of the hypothesis, the design and more all matter far more than the raw number of tests run.