In 2003, the British Design Council set about creating a simple way to describe the design process.
They deconstructed the steps they encountered in a number of successful projects and they started to see similarities and patterns emerge. This enabled them to map out what they believed was a universal design process.
In 2004 they announced the Double Diamond. It comprises four distinct phases that the team, looking for an effective mnemonic, referred to as Discover, Define, Develop and Deliver. The shape of the double diamond diagram represents the repeating pattern of divergent and convergent thinking that is required across these four phases.
The Double Diamond process in UX research and design.
The Double Diamond process went on to become universally accepted. And while variations have inevitably appeared, they all share essentially the same ingredients:
There is an initial phase of divergent thinking where researchers try to put aside what they already know about their product idea, and simply ask questions. There is a period of convergent thinking where new data is analyzed, and distilled, to create common themes and insights. There is another divergent phase while potential solutions are considered. And, finally, there is convergence around a single design.
During the discovery phase researchers need to ask open-ended questions, such as, “Is there a real customer requirement? What problem is our new product solving? And what is its market potential?”
The most popular methods during this phase tend to be those that allow researchers to collect qualitative data, such as Desirability Study, Diary Study, Ethnographic Field Study, Focus Group, Stakeholder Interviews, Lab Study, and Customer Surveys.
Research methods can be either moderated (with a UX professional facilitating the study) or unmoderated (where the participants take part on their own).
They can also be quantitative (generating numerical data including sometimes an overall usability score) or qualitative (where the output is more descriptive).
Desirability studies help identify and define specific qualities of a product or brand. Participants are shown the product (this could be a live site, a prototype, or even just a selection of marketing copy or images). They are then asked to describe what they see using a list of pre-selected words. This exercise provides qualitative or quantitative data about whether the different aspects of a product are “awesome”, “average” or just plain “weird.”
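As an illustration of how desirability-study responses might be tallied, the sketch below counts how often each pre-selected word was chosen. The word list and responses are invented for this example.

```python
# Hypothetical desirability-study tally: count how often each word from
# the pre-selected list was chosen. All data below is invented.
from collections import Counter

word_list = ["awesome", "average", "weird", "trustworthy", "cluttered"]
responses = [                      # words chosen by each participant
    ["awesome", "trustworthy"],
    ["average", "cluttered"],
    ["awesome", "weird"],
]

counts = Counter(w for chosen in responses for w in chosen if w in word_list)
for word, n in counts.most_common():
    print(f"{word}: chosen by {n} of {len(responses)} participants")
```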
Diary studies gather information about a participant’s experience of a product or service over an extended period of time. Participants write about their experiences in a diary-style format. They may also take photos or perform other activities to record their experiences.
Diary studies are useful for understanding long-term behavior on the researcher’s own product or service, or on a competitor’s. They are thought to be reliable because they remove the influence of the researcher and the unnatural setting of a lab.
A diary study helps the researcher understand:
Ethnographic studies involve talking with people and watching them perform tasks in a natural context. The aim is to gather information not just on how people interact but on how location, environment, and other contexts affect their day-to-day behaviors.
UX professionals use ethnographic research to gain a better understanding of their users and the different ways they interact with a product or technology.
A group of 6-10 participants from a specific target market is gathered together with a moderator. A potential product may be described or shown, or an actual product or market sector discussed. Typical questions the moderator might ask are:
The group’s thoughts and feelings are then collated and may be used to inform the future direction of the product.
Stakeholder interviews are similar to the focus groups described above, except that participants meet with the researcher in a one-on-one setting to discuss a potential product or service.
In a traditional lab-based study, 6-10 participants are brought into a ‘lab’ environment to run through a series of tasks. Participants are assigned tasks and asked to perform them on a pre-configured computer or mobile device. While doing so, they are observed by a person sitting next to them, via a monitor, or through a one-way mirror.
In order to gauge the time taken for a given task, participants might be encouraged to walk through their tasks without interruption. In this instance, detailed questions are saved for after the task, or even after the study.
Alternatively, if the ‘think aloud’ protocol is being used, participants are asked to express their thoughts out loud while actually working on a task. With this protocol, the researcher could come back immediately with any follow-up questions while the user’s impressions are still fresh.
Online surveys can be triggered in a number of different ways, but what they all have in common is that they provide qualitative feedback from users and also allow researchers to calculate quantitative metrics. Users can be intercepted directly from a website or app, and using advanced survey capabilities such as logic, conditioning and branching, they can be steered quickly and elegantly through only the questions most appropriate to them.
NB: Typically, surveys are quantitative, asking rating-scale questions with the goal of ensuring that UX decisions are backed by robust data. But surveys can also generate qualitative insights from open-ended questions.
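The logic, conditioning, and branching mentioned above can be pictured as a small routing table: each answer determines the next question the respondent sees. The sketch below is a minimal illustration; the questions and ids are invented.

```python
# A minimal sketch of survey branching logic. Each answer routes the
# respondent to the next relevant question; questions here are invented.
survey = {
    "q1": {"text": "Did you complete your purchase today?",
           "branch": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "How easy was checkout (1-5)?", "branch": {}},
    "q3": {"text": "What stopped you from completing your purchase?",
           "branch": {}},
}

def route(question_id, answer):
    """Return the next question id for a given answer, or None if done."""
    return survey[question_id]["branch"].get(answer)

# A respondent who answers "no" is steered straight to the
# abandonment question and skips the checkout-ease question.
print(route("q1", "no"))
```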
During the definition phase, the data collected during the discovery phase is analyzed to surface common themes and to generate fresh insights.
New assumptions are tested using A/B Testing, Card Sort and Usability Testing.
In A/B testing the participant is shown two alternative versions of the same page or feature, and asked to choose the one that they like best. For instance, if you are trying to decide on the best text for the ‘buy’ button on a shopping page, you might use an A/B test to show half of your traffic a button that says ‘add to cart’ and the other half a button that says ’buy now’. This process can be repeated for the key elements of every page.
Being a quantitative test, A/B testing will reveal the preferred version of each element but not necessarily how participants actually feel about it (they might not particularly like either version!).
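As a sketch of how the results of the button test above might be analyzed, the snippet below compares two conversion rates with a two-proportion z-test. The visitor and conversion counts are invented for illustration.

```python
# Hypothetical A/B test analysis: compare the conversion rates of two
# button labels with a two-proportion z-test. Counts are invented.
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Return (rate_a, rate_b, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# 'add to cart' shown to 1000 visitors, 'buy now' to another 1000
rate_a, rate_b, p = z_test_two_proportions(120, 1000, 150, 1000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p = {p:.3f}")
```

A small p-value suggests the difference between the two labels is unlikely to be chance; with counts this size the result sits right around the conventional 0.05 threshold.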
In card sorting, participants are presented with a list of items (for example, all the products featured in an online supermarket) and asked to group them in a way that makes the most logical sense to them. Depending on the type of card sorting, participants can also choose (or come up with) a name for the group of items they’ve put together.
There are three types of card sorting: open (participants create and name their own groups), closed (participants sort items into predefined groups), and hybrid (a combination of the two).
The popular image of card sorting is of groups of participants sticking post-it notes on whiteboards, but it can be very time-consuming to collect, collate and analyze these hundreds of bits of paper. Digital card sorting is a much more efficient method of collecting the same information.
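One common way to analyze digital card-sort data is a co-occurrence matrix: count how often each pair of items was placed in the same group. The sketch below uses invented supermarket items and groupings.

```python
# Hypothetical open card-sort analysis: count how often each pair of
# items was grouped together across participants. Data is invented.
from itertools import combinations
from collections import Counter

sorts = [  # one list of groups per participant
    [{"milk", "cheese"}, {"bread", "rolls"}],
    [{"milk", "cheese", "bread"}, {"rolls"}],
    [{"milk", "cheese"}, {"bread", "rolls"}],
]

pair_counts = Counter()
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together by most participants suggest strong categories.
for pair, count in pair_counts.most_common():
    print(f"{pair}: {count}/{len(sorts)} participants")
```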
Tree testing is typically used in combination with card sorting, to validate the category structure proposed by your card sorting analysis.
In tree testing, the main categories and subcategories for a website are already established or proposed.
Participants are asked to explore these categories in order to find a particular item or piece of content. They click through the various links until they find the category where they expect to find the item. Looking at task effectiveness and where people get lost in the structure can help to refine your menu.
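Task effectiveness in a tree test is often summarized with two simple measures: success (the participant ended at the right node) and directness (they got there without backtracking). The sketch below computes both from invented participant paths.

```python
# Hypothetical tree-test scoring. Success = reached the target node;
# directness = reached it without backtracking. Paths are invented.
target = "Groceries > Dairy > Milk"
results = [  # (sequence of clicks, final destination) per participant
    (["Groceries", "Dairy", "Milk"], "Groceries > Dairy > Milk"),
    (["Groceries", "Bakery", "Groceries", "Dairy", "Milk"],
     "Groceries > Dairy > Milk"),
    (["Household", "Cleaning"], "Household > Cleaning"),
]

successes = [dest == target for _, dest in results]
# A direct path has exactly as many clicks as the target has levels.
direct = [dest == target and len(path) == len(target.split(" > "))
          for path, dest in results]

print(f"success rate: {sum(successes) / len(results):.0%}")
print(f"directness:   {sum(direct) / len(results):.0%}")
```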
Remote moderated usability testing (also referred to as online moderated research) is similar to lab testing, except that the participants work remotely: a moderator collaborates with study participants using screen-sharing technology and an audio bridge.
When carrying out a study the testing software collects the quantitative and/or qualitative data as the participants work through tasks. The moderator is there to guide participants through the tasks, ask questions, respond to their questions and gather feedback.
The remote nature of the tests allows researchers to spend more time with each subject, perhaps over several sessions. More data can be collected this way, making the most of the participants’ time. Researchers are able to collect and triangulate different kinds of data and to combine different kinds of methodologies within a single study.
During the design and development phase, UX researchers continue to iterate the tests used in the discovery and define phases to collect qualitative data about the product as it evolves. But with the emergence of a concept or a prototype design (or designs) there is an increasing emphasis on real-world usability testing.
In this phase, A/B Testing, Click Testing, Concept Testing, Eye Tracking, Information Architecture, Participatory Design, and Usability Testing come into play.
Click tests are a simple way to check whether users know where to click to perform specific actions on a proposed site or application. The researcher typically uses static images to create mock-ups of the test pages and asks participants questions such as, “Where would you click to access the shopping cart?”.
This method can be used to validate anything from a full product design to a wireframe prototype or a sketch on the back of an envelope.
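Scoring a click test usually comes down to checking whether each click landed inside the target region of the mock-up. The sketch below is a minimal version; the cart-icon location and click coordinates are invented.

```python
# Hypothetical click-test scoring: count clicks that land inside the
# target region of a static mock-up. All coordinates are invented.
def in_target(click, target):
    """target is (x, y, width, height); click is (x, y)."""
    x, y, w, h = target
    cx, cy = click
    return x <= cx <= x + w and y <= cy <= y + h

cart_icon = (900, 20, 48, 48)                 # assumed icon location
clicks = [(920, 40), (910, 60), (400, 300), (930, 25)]

hits = sum(in_target(c, cart_icon) for c in clicks)
print(f"{hits}/{len(clicks)} participants clicked the expected area")
```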
Concepts or prototypes are inexpensive previews of a design that allow the team to test how well the real product might work. For product team members, prototypes promote discussion and understanding. UXers can test prototypes with users and get early feedback to avoid sinking valuable resources into a bad design.
In the long run, prototyping saves time, money, and effort. Rather than testing later on when a product or feature is built and nearly shipped, the modern agile method is to test early and often.
To get the most out of prototype testing, it’s important to remind participants that the prototype is not the finished product. Encourage them to look past any unpolished features or elements that are missing or incomplete.
Concept testing can be done on a one-to-one basis or with a larger number of participants, either in person or online.
Eye-tracking lets UX researchers see precisely where participants look on a screen when performing a task. It requires a special piece of equipment that tracks the user’s eyes as they look around the screen, and this information generates a heatmap of where on the page the user is concentrating at any given time. This, in turn, helps determine whether information and key navigation items are where users expect them to be.
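Conceptually, a gaze heatmap is built by binning fixation samples into a grid and coloring cells by sample count. The minimal sketch below bins invented gaze coordinates into 100-pixel cells; real eye-tracking software does this with far finer resolution and smoothing.

```python
# A minimal sketch of heatmap aggregation: gaze samples (invented) are
# binned into a coarse grid; hotter cells held attention longest.
from collections import Counter

CELL = 100  # pixels per grid cell
gaze_points = [(120, 80), (130, 90), (125, 85), (600, 400), (610, 395)]

heat = Counter((x // CELL, y // CELL) for x, y in gaze_points)
for cell, count in heat.most_common():
    print(f"cell {cell}: {count} samples")
```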
Information Architecture (IA) refers to the way that content is presented and accessed – whether through menus, breadcrumbs, categories, links or any other device that takes a user from one page to another.
Information architecture testing helps researchers define navigation, improve information taxonomy and maximize the findability of pages across a website. This is usually explored through a combination of card sorting and tree testing.
In participatory design, target customers of a product, service, or experience are recruited to take an active role in co-designing it.
Usability testing explores the interaction of participants with a prototype, website, app or other digital product. It captures their actions, behaviors, feedback and/or spoken-aloud impressions.
Typically, it is done remotely, on the participant’s own personal devices, or in a dedicated UX research lab.
Remote unmoderated usability testing (also known as a remote panel study) is similar to a lab study except the participants are working remotely and without a moderator. This has significant implications for the cost of repeat, or multiple, studies.
Remote unmoderated usability testing has been shown to provide statistically valid quantitative data and rapid access to qualitative and behavioral data.
During the delivery phase, all of the previous methods are still employed to fine-tune the product, right up to, and even after, its launch. When the prototype is at an advanced, or even completed, stage, testing can be done with actual customers performing real tasks on a functional product.
Methods such as Clickstream Analysis, Customer Feedback/Voice of the Customer, Intercept Study, True-Intent Study, and UX Benchmarking can be important during this phase.
Clickstreams are a record of the aggregated paths (URLs) followed by users during their navigation to a web page or pages.
Clickstreams allow researchers to view and analyze the paths that users took while performing specific tasks, showing what proportion of users followed each path, and the final outcomes (completed purchase or task, error, abandonment, or timeout).
Some software incorporates heatmaps (showing an aggregation of the areas where users clicked on each page) to allow more detailed behavioral analysis.
Typically, clickstreams are used to help researchers evaluate the sales funnel and calculate the respective conversion rate of different journeys.
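The funnel evaluation described above can be sketched as a simple aggregation: group sessions by the path taken, then compute the conversion rate of each journey. The session paths and outcomes below are invented.

```python
# Hypothetical clickstream aggregation: group sessions by path and
# compute each journey's conversion rate. All sessions are invented.
from collections import Counter

sessions = [  # (ordered page path, outcome) per session
    (("home", "product", "cart", "checkout"), "purchase"),
    (("home", "product", "cart"), "abandon"),
    (("home", "search", "product", "cart", "checkout"), "purchase"),
    (("home", "product", "cart", "checkout"), "purchase"),
]

path_totals = Counter(path for path, _ in sessions)
path_conversions = Counter(path for path, outcome in sessions
                           if outcome == "purchase")

for path, total in path_totals.items():
    rate = path_conversions[path] / total
    print(" > ".join(path), f"= {rate:.0%} of {total} session(s)")
```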
A Customer Feedback or VoC study is an attempt to collect the ‘true’ opinions of participants when they visit a site that is already live. These studies are an important part of ongoing research after a site or feature is launched. By being ‘always on’ they gather feedback constantly in the background. So any problems that weren’t picked up before launch can be corrected, and customer responses to any future changes will be reflected in the data.
Typically, the study will take the form of a short survey. Visitors are invited to participate as they arrive at the site’s homepage or other target pages. Before they fill out the survey, they are asked to continue doing what they came to do on the site. Users spend as little or as much time as they wish to accomplish their goal. Then they can start the questionnaire.
During the survey, the participants would be asked a series of qualitative questions, such as:
Data collected can help companies identify pain points for their customers, and fix them immediately. It also informs researchers about the overall health of a site and allows them to segment visitors and create and/or flesh out user personas.
Other ways that UX researchers can gather VoC data:
Key benefits of a VoC study:
Note that these studies tend to create very large sample sizes and can take a lot of time to process.
An intercept study could be a general-purpose survey, such as a Customer Feedback survey, but sometimes an automated survey is triggered to ask a single question with a very specific objective.
For example, whenever a user exits the checkout page without completing a transaction, an intercept survey might ask, “What prevented you from completing your purchase just now?”.
An intercept can be a very direct, even abrupt, way of speaking to a customer, so it should be handled sensitively. It’s important not to interrupt customers while they are actively using the site, especially if they are in the process of buying something.
And in the above example, best practice might be to offer assistance as well as requesting information about the buying experience. So, for example:
“Sorry, you weren’t able to complete your order. We’d be happy to assist you and will offer you a discount for your inconvenience. Would you like to chat to an assistant now?”
As with VoC surveys above, intercept surveys can reveal how visitors use a site or a service, whether they have been able to achieve their goals and whether they are satisfied with the overall experience provided.
True intent studies ask organic visitors to a website, mobile site or mobile app what their intentions are, and gather actionable information about their experience.
True intent is a cost-effective research approach allowing researchers to get answers to qualitative questions like: Who is coming to the site? Why are they visiting? Did they get what they needed? How are they using the site? How do they feel about it?
UX benchmarks allow you to measure the baseline performance of digital products and services and measure how changes are moving the UX needle over time. Typical benchmark studies are either longitudinal, in which measurements are continuously made against the same baseline, or competitive, in which products are measured periodically against competitors’ offerings.
With benchmarking, participants carry out the same tasks on a variety of websites in order to determine:
In order to score different sites, various measurements can be combined. Collecting both behavioral data (such as task success, task time, page views) and attitudinal data (such as ease of use, trust, and appearance) gives a sophisticated measure of relative standing.
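One simple way to combine behavioral and attitudinal measurements into a single benchmark score is a weighted mean over metrics normalized to a common scale. The sketch below illustrates the idea; the metric names, weights, and values are invented, and real benchmark studies often use standardized scores (e.g. z-scores) instead.

```python
# A sketch of combining behavioral and attitudinal measures into one
# benchmark score. Metrics, weights, and values below are invented.
def benchmark_score(metrics, weights):
    """Weighted mean of metrics already normalized to a 0-1 scale."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total_weight

site_a = {"task_success": 0.85, "ease_of_use": 0.70, "trust": 0.60}
site_b = {"task_success": 0.75, "ease_of_use": 0.80, "trust": 0.90}
weights = {"task_success": 2, "ease_of_use": 1, "trust": 1}  # weight success most

print(f"Site A: {benchmark_score(site_a, weights):.2f}")
print(f"Site B: {benchmark_score(site_b, weights):.2f}")
```

Tracked over repeated studies against the same baseline, a composite like this shows whether changes are moving the UX needle; computed across competitor sites, it gives the relative standing the text describes.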
For a description of the most popular UX research tools, and when they are applied, see the next chapter, Which UX research tools?