Shift the odds of a successful redesign or relaunch in your favour!
If you were to sketch out a typical development cycle, it would probably look like a spring, as ongoing iterations and improvements form something like a never-ending loop. In contrast, a major one-off activity such as a website relaunch is likely to be a far more linear process.
In fact, like a good novel, the process of redesigning a website tends to have a beginning, a middle and an end. If your team gets most things right, then with a bit of luck the destination will be worth the journey.
However, failed relaunch projects are all too familiar; they kick off with the best intentions, then muddle along for months or even years at great frustration and cost. The story doesn’t always have a happy ending.
So what does ‘good’ look like? And how can you shift the odds of a successful redesign or relaunch in your favour? The first thing to do is define what ‘good’ will look like for you, and then plan your path to help avoid getting lost along the way.
For most customer-facing websites, the KPI of greatest importance to a team will be conversion, followed perhaps by engagement and CSAT (customer satisfaction) or NPS (net promoter score). Each of these may be affected, for better or for worse, by a redesign.
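As an aside, NPS is derived from a single 0–10 “how likely are you to recommend us?” question: respondents scoring 9–10 are promoters, 0–6 are detractors. Here’s a minimal sketch of the standard calculation (the sample scores are made up):

```python
def nps(scores):
    """Net promoter score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Made-up example: four promoters, three detractors out of ten respondents.
print(nps([10, 9, 9, 8, 7, 6, 4, 10, 8, 3]))  # -> 10.0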
Scenario A: Immediate success
This happens very rarely, and usually only when a business has been seriously under-investing in its digital properties for some time.
Scenario B: Who moved the milk?
KPIs dip at first as regular visitors adjust to the change, then recover and climb once they find their way around. Facebook is a prime example: each update garners much initial annoyance, but soon after, customers start to benefit from and enjoy the new version. It’s just like when the person you live with puts the milk back somewhere different in the fridge.
Scenario C: Something to work on
Initial results of the relaunch show little change, or a small fall, in KPIs. There may be multiple root causes, and further testing of the live site will be needed to identify and fix them.
Scenario D: Back to the drawing board!
This is the worst-case scenario, where KPIs fall and keep falling. It can be avoided through planning, research and testing.
Scenario B will be the most likely form of success for most businesses, so it is important to set expectations accordingly: this is what execs should be anticipating. Many businesses and agencies end up in scenario C or D, where sales suffer, share prices fall and people start disappearing.
So how do you get to scenario A or B, and avoid C and D? Research, testing, more testing and then further validation. It takes guts to stay true to a process and fend off bad ideas along the way, but with the right tools and processes you can do it.
Not what they want but what they need. Find out what your most valuable customers need from your existing website or app, and what they’d need from a new and improved version.
Too many companies think they know their customers without ever truly asking. When conducting formative research, don’t ask just five people; ask 500. Be certain, because what you find out will guide the coming months of design and development.
If you’re smart, you won’t rely on just one method of user research but several: focus groups, think-aloud studies, competitive benchmarking and intercept surveys on your live site/app. Get as much data as you can and you’ll have a better chance of setting out on the right course.
Go big or go home. At this early stage of the design process you can’t afford to rely on the opinions of just a few.
For most of the companies I’ve been lucky enough to work with, website navigation has been identified as the greatest cause of lost sales and dissatisfaction. The research phase will point you in the right direction on what might work, and what to avoid, when deciding where to put all your content.
Card sorting will get you closer to the ideal structure, and tree testing will show whether you’ve got it right and whether users follow the expected paths. Both are simple yet effective exercises that will guide your team.
Don’t restrict these tests to small sample sizes, and don’t over-represent a single demographic. What one person thinks is a good way to group content or find things will not be universal. Small sample sizes yield higher margins of error, and in the early stages you can’t afford to get it wrong.
Not every idea is a good one, and it’s likely that you will have several agile teams working on different parts of the overall puzzle. Aim to test concepts early and often with mid-sized groups of around 50 participants. Concept studies are fast and relatively cost-effective, and the data set is large enough to give acceptable confidence at this early stage.
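To put rough numbers on why sample size matters, here’s a quick sketch of the margin of error for a simple yes/no proportion at 95% confidence (the 1.96 z-score is the standard choice; the sample sizes are simply the ones discussed above):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p observed in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# p = 0.5 is the worst case, so this is the widest the margin can get.
for n in (5, 50, 100, 500):
    print(f"n = {n:>3}: +/-{margin_of_error(0.5, n):.1%}")
```

Five participants leave you guessing to within roughly ±44 percentage points; 50 narrows that to about ±14, and 500 to about ±4.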
You don’t have to wait for a working high-fidelity model or interactive prototype to test. You can take an early design mock-up and run it through five-second timeout tests, click tests or just a good old-fashioned survey. Use heat maps to aggregate popular click, scroll and cursor-movement paths. Each test removes the bad ideas and iteratively gets you closer to a successful launch.
If you’re A/B testing on the live site, then it’s too late. For key process journeys like checkout, booking or parcel tracking, don’t risk leaving it until launch to test your favoured prototypes; test on the staging site, or even earlier if possible. Participant numbers of around 100 per test are suitable. Use questions and metrics that relate to the KPIs that will be used to measure the site’s or app’s success when it goes live.
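When comparing two prototypes at this stage, a simple two-proportion z-test will tell you whether a difference in, say, task-completion rate is real or just noise. A sketch, using hypothetical completion counts for 100 participants per variant:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two observed proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: prototype A completed by 58/100 users, prototype B by 74/100.
z = two_proportion_z(58, 100, 74, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 implies a real difference at ~95% confidence
```

With these made-up numbers z ≈ 2.39, so the difference is unlikely to be noise. Note that at 100 participants per variant only fairly large differences clear the bar, which is why this scale of testing suits key journeys rather than fine polish.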
It may not be useful to measure KPIs like NPS at the early stages of concept testing, but later on it’s metrics like these that executives understand. If their bonus is dictated by customer satisfaction, then measure this on the later prototypes; if it’s conversion, then measure likelihood to purchase.
There is no golden rule other than to try to speak the same language as those paying for the project.
Soft or hard launch? Do you go big bang, or gradually introduce more and more of your site/app traffic to the new version? Often this decision is made for you by deadlines and executives needing to show results. Ideally, you’d introduce traffic gradually, with clear checkpoints to mitigate a website flop. So what should these checkpoints look like?
You need to agree internally what counts as an acceptable margin of change in KPIs before sending a greater proportion of traffic to the new site. Start by directing 10% of traffic to the new site/app and track conversion, engagement, task-completion rates and customer satisfaction. Now compare these figures to those of the legacy website.
If KPIs fall within the accepted limit, continue moving traffic across. If they fluctuate beyond the accepted limit, pause the transition, fix the problems and then continue.
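In code, the checkpoint logic is little more than a tolerance check. A minimal sketch, where the 5% tolerance, the KPI names and the checkpoint readings are all illustrative assumptions rather than figures from a real launch:

```python
TOLERANCE = 0.05  # illustrative: the acceptable relative change agreed internally

def within_limits(new, legacy, tol=TOLERANCE):
    """True if every KPI on the new site stays within tol of the legacy figure."""
    return all(abs(new[k] - legacy[k]) / legacy[k] <= tol for k in legacy)

legacy = {"conversion": 0.031, "task_completion": 0.78, "csat": 4.1}

# Hypothetical readings at each traffic checkpoint; in practice these would
# come from your analytics platform as traffic is gradually shifted.
checkpoints = {
    0.10: {"conversion": 0.030, "task_completion": 0.79, "csat": 4.0},
    0.25: {"conversion": 0.032, "task_completion": 0.77, "csat": 4.2},
    0.50: {"conversion": 0.027, "task_completion": 0.71, "csat": 3.6},
}

for share, kpis in checkpoints.items():
    if within_limits(kpis, legacy):
        print(f"{share:.0%} traffic: within tolerance, keep ramping")
    else:
        print(f"{share:.0%} traffic: out of tolerance, pause and fix")
        break
```

Here the 50% checkpoint would trip the check and halt the ramp, which is exactly the point: the transition pauses before the whole audience sees a regression.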
There are several books, blogs and reports by consultancies, visionaries and research houses that describe the ROI of good UX testing. The return increases dramatically the earlier in the process testing is done. Test early, and be sure that you’re working with good data.