Introduction

In 2003, the British Design Council set about creating a simple way to describe the design process.

They deconstructed the steps they encountered in a number of successful projects and they started to see similarities and patterns emerge. This enabled them to map out what they believed was a universal design process. 

In 2004 they announced the Double Diamond. It comprises four distinct phases that the team, looking for an effective mnemonic, referred to as Discover, Define, Develop and Deliver. The shape of the double diamond diagram represents the repeating pattern of divergent and convergent thinking that is required across these four phases. 

The Double Diamond process in UX research and design.

The four Ds of the design process

  • Discover – Start by asking questions of stakeholders, such as “What are our users’ needs?”
  • Define – Look for patterns in the research findings to create a design brief that defines the key aspects of the solution.
  • Develop – Ideate, design, develop, test, and refine multiple potential solutions.
  • Deliver – Select the single best solution through testing, and refine it after launch.

The Double Diamond process went on to become widely accepted. And while variations have inevitably appeared, they all share essentially the same ingredients:

There is an initial phase of divergent thinking where researchers try to put aside what they already know about their product idea and simply ask questions. There is a period of convergent thinking where the new data is analyzed and distilled to create common themes and insights. There is another divergent phase while potential solutions are considered. And, finally, there is convergence around a single design.

Which UX research methods are best during the discovery phase?

During the discovery phase researchers need to ask open-ended questions, such as: “Is there a real customer requirement? What problem is our new product solving? And what is its market potential?”

The most popular methods during this phase tend to be those that allow researchers to collect qualitative data, such as Desirability Studies, Diary Studies, Ethnographic Field Studies, Focus Groups, Stakeholder Interviews, Lab Studies, and Customer Surveys.

Research methods can be either moderated (with a UX professional facilitating the study) or unmoderated (where the participants take part on their own).

They can also be quantitative (generating numerical data, sometimes including an overall usability score) or qualitative (where the output is more descriptive).

Desirability Study (Quantitative or Qualitative / Moderated or Unmoderated)

Desirability studies help identify and define specific qualities of a product or brand. Participants are shown the product (this could be a live site, a prototype, or even just a selection of marketing copy or images).  They are then asked to describe what they see using a list of pre-selected words. This exercise provides qualitative or quantitative data about whether the different aspects of a product are “awesome”, “average” or just plain “weird.”

Diary/Camera Study (Qualitative / Unmoderated)

Diary studies gather information about a participant’s experience of a product or service over an extended period of time. Participants write about their experiences in a diary-style format. They may also take photos or perform other activities to record their experiences. 

Diary studies are useful for understanding long-term behavior on the researcher’s own product or service, or on a competitor’s. They are thought to be reliable because they remove the influence of the researcher and the artificiality of a lab setting.

A diary study helps the researcher understand:

  • What time of day users engage with a product or service
  • How users share content with others
  • In what capacity users engage with a product
  • Users’ primary tasks
  • User workflows for completing longer-term tasks
  • Motivation to perform specific tasks
  • How learnable a system is
  • Customer loyalty over time
  • Brand perception after engaging with the product or service
  • Typical customer journey and cross-channel user experience as customers interact with the service using different devices and channels
  • The cumulative effect of multiple service touch points.

Ethnographic Field Study (Qualitative / Moderated)

Ethnographic studies involve talking with people and watching them perform tasks in a natural context. The aim is to gather information not just on how people interact with a product, but on how location, environment, and other contextual factors affect their day-to-day behaviors.

UX professionals use ethnographic research to gain a better understanding of their users and the different ways they interact with a product or technology.

Focus Groups (Qualitative / Moderated)

A group of 6–10 participants from a specific target market is brought together with a moderator. A potential product may be described or shown, or an actual product or market sector discussed. Typical questions the moderator might ask are:

  • What do you feel about this product/service?
  • What do you feel about other products/services in this space?
  • When was the last time you made a purchase?
  • How often do you use that/our product/service?
  • If you could change one thing about it, what would it be?
  • What kind of problems have you experienced?
  • What are its strengths and weaknesses?
  • What companies/products are the best in market?
  • What do you like most about them?
  • Is there anything else to consider?

The group’s thoughts and feelings are then collated and may be used to inform the future direction of the product.

Interviews (Qualitative / Moderated)

Similar to Focus Groups above except that participants meet with the researcher in a one-on-one interview setting to discuss a potential product or service. 

Lab Study (Quantitative or Qualitative / Moderated)

In a traditional lab-based study, 6–10 participants are brought into a ‘lab’ environment to run through a series of tasks. Participants are assigned tasks and asked to perform them on a pre-configured computer or mobile device. While doing so, they are observed by a person sitting next to them, via a monitor, or through a one-way mirror.

In order to gauge the time taken for a given task, participants might be encouraged to walk through their tasks without interruption. In this instance, detailed questions are saved for after the task, or even after the study.

Alternatively, if the ‘think aloud’ protocol is being used, participants are asked to express their thoughts out loud while actually working on a task. With this protocol, the researcher could come back immediately with any follow-up questions while the user’s impressions are still fresh. 

Surveys (or Questionnaires) (Quantitative or Qualitative / Unmoderated)

Online surveys can be triggered in a number of different ways, but what they all have in common is that they provide qualitative feedback from users and also allow researchers to calculate quantitative metrics. Users can be intercepted directly from a website or app, and using advanced survey capabilities such as logic, conditioning and branching, they can be steered quickly and elegantly through only the questions most appropriate to them. 

NB: Typically, surveys are quantitative, asking rating-scale questions so that UX decisions are backed by robust numerical data. But surveys can also generate qualitative insights from open-ended questions.
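
As an illustration of how branching logic steers respondents through only the relevant questions, here is a minimal sketch; the question IDs, wording, and data structure are assumptions for the example, not any particular survey platform’s API.

```typescript
// Minimal sketch of survey branching logic (illustrative only; not any
// particular survey platform's API). Each question decides which question
// to show next based on the answer given.
type QuestionId = string;

interface Question {
  id: QuestionId;
  text: string;
  // Returns the id of the next question, or null to end the survey.
  next: (answer: string) => QuestionId | null;
}

const questions: Record<QuestionId, Question> = {
  purpose: {
    id: "purpose",
    text: "What did you come to do today?",
    next: (answer) => (answer === "buy something" ? "purchase" : "feedback"),
  },
  purchase: {
    id: "purchase",
    text: "Were you able to complete your purchase?",
    next: () => "feedback",
  },
  feedback: {
    id: "feedback",
    text: "How can we make our site better?",
    next: () => null, // end of the survey
  },
};

// Walk one respondent through the survey, given a function that collects answers.
function runSurvey(start: QuestionId, ask: (q: Question) => string): string[] {
  const answers: string[] = [];
  let current: QuestionId | null = start;
  while (current !== null) {
    const q: Question = questions[current];
    const answer = ask(q);
    answers.push(`${q.id}: ${answer}`);
    current = q.next(answer);
  }
  return answers;
}
```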

Which UX research methods are used during the definition phase?

During the definition phase, the data collected during the discovery phase is analyzed to surface common themes and to generate fresh insights.

New assumptions are tested using methods such as A/B Testing, Card Sorting, Tree Testing, and Usability Testing.

A/B Testing (Quantitative / Unmoderated only)

In A/B testing, traffic is split so that some users see one version of a page or feature while the rest see an alternative, and their behavior is compared to see which version performs better. For instance, if you are trying to decide on the best text for the ‘buy’ button on a shopping page, you might use an A/B test to show half of your traffic a button that says ‘add to cart’ and the other half a button that says ‘buy now’. This process can be repeated for the key elements of every page.

Being a quantitative method, A/B testing will reveal which version of an element performs better, but not necessarily how users actually feel about it (they might not particularly like either version!).
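
To make the mechanics concrete, here is a minimal sketch of deterministic traffic splitting and a simple conversion-rate comparison; the hash-based bucketing, variant names, and event shape are assumptions for illustration, not a prescribed implementation.

```typescript
// Minimal sketch of an A/B split (illustrative only). Each user is
// consistently bucketed into one of two variants by hashing their id,
// so they always see the same button text on repeat visits.
const VARIANTS = ["add to cart", "buy now"] as const;
type Variant = (typeof VARIANTS)[number];

// Simple, stable string hash; any deterministic hash works for bucketing.
function hash(s: string): number {
  let h = 0;
  for (const ch of s) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h;
}

function assignVariant(userId: string): Variant {
  return VARIANTS[hash(userId) % VARIANTS.length];
}

// One logged checkout outcome per user (assumed for simplicity).
interface Outcome {
  userId: string;
  purchased: boolean;
}

// Compare conversion rates between the two variants.
function conversionRates(outcomes: Outcome[]): Record<Variant, number> {
  const totals = { "add to cart": { users: 0, buys: 0 }, "buy now": { users: 0, buys: 0 } };
  for (const o of outcomes) {
    const bucket = totals[assignVariant(o.userId)];
    bucket.users += 1;
    if (o.purchased) bucket.buys += 1;
  }
  return {
    "add to cart": totals["add to cart"].buys / Math.max(1, totals["add to cart"].users),
    "buy now": totals["buy now"].buys / Math.max(1, totals["buy now"].users),
  };
}
```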

Card Sorting (Quantitative / Moderated or Unmoderated)

In card sorting, participants are presented with a list of items (for example, all the products featured in an online supermarket) and asked to group them in a way that makes the most logical sense to them. Depending on the type of card sorting, participants can also choose (or come up with) a name for the group of items they’ve put together. 

There are three types of card sorting:

  • Open card sorting: Participants are asked to group cards into categories that make sense to them, and then label each category in a way that they feel accurately describes the content.
  • Closed card sorting: Participants sort cards into groups that have already been defined. This might be used, for example, in an online store that’s launching new products or services that might fit into a variety of different categories.
  • Hybrid card sorting: This is a mixture of open and closed sorting. Participants can sort cards into categories that are already defined or create their own if they don’t think the ones presented are sufficiently accurate.

The popular image of card sorting is of groups of participants sticking post-it notes on whiteboards, but it can be very time-consuming to collect, collate and analyze these hundreds of bits of paper. Digital card sorting is a much more efficient method of collecting the same information. 
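
A common way digital card-sorting tools analyze the results is with a similarity (co-occurrence) count: for each pair of cards, tally how many participants grouped them together. The sketch below uses hypothetical data and a plain pair count, rather than any specific tool’s output.

```typescript
// Minimal sketch of card-sort similarity analysis (hypothetical data).
// For each pair of cards, count how many participants placed them in the
// same group; high counts suggest the items belong in the same category.
type CardSort = string[][]; // one participant's groups of card names

const sorts: CardSort[] = [
  [["milk", "cheese"], ["bread", "bagels"]],
  [["milk", "cheese", "bread"], ["bagels"]],
  [["milk", "cheese"], ["bread", "bagels"]],
];

function similarityCounts(sorts: CardSort[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const participant of sorts) {
    for (const group of participant) {
      for (let i = 0; i < group.length; i++) {
        for (let j = i + 1; j < group.length; j++) {
          const key = [group[i], group[j]].sort().join(" + ");
          counts.set(key, (counts.get(key) ?? 0) + 1);
        }
      }
    }
  }
  return counts;
}

// e.g. "cheese + milk" => 3, "bagels + bread" => 2, "bread + milk" => 1
console.log(similarityCounts(sorts));
```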

Tree Testing (Quantitative / Unmoderated)

Tree testing is typically used in combination with card sorting, to test the category structure proposed by your card sorting analysis.

In tree testing, the main categories and subcategories for a website are already established or proposed.

Participants are asked to explore these categories in order to find a particular item or piece of content. They click through the various links until they find the category where they expect to find the item. Looking at task effectiveness and where people get lost in the structure can help to refine your menu.
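
As a simple illustration of the kind of task-effectiveness summary a tree test produces, here is a sketch with hypothetical results; the “directness” definition and the data are assumptions for the example.

```typescript
// Minimal sketch of summarizing tree-test results (hypothetical data).
// "Success" means the participant ended on the correct node; "directness"
// means they got there without backtracking through other branches.
interface TreeTestResult {
  participant: string;
  success: boolean; // ended on the node where the item actually lives
  direct: boolean;  // reached it without backtracking
  clicks: number;   // total links clicked along the way
}

const results: TreeTestResult[] = [
  { participant: "p1", success: true, direct: true, clicks: 3 },
  { participant: "p2", success: true, direct: false, clicks: 7 },
  { participant: "p3", success: false, direct: false, clicks: 9 },
];

function summarize(results: TreeTestResult[]) {
  const n = results.length;
  const successRate = results.filter((r) => r.success).length / n;
  const directnessRate = results.filter((r) => r.direct).length / n;
  const avgClicks = results.reduce((sum, r) => sum + r.clicks, 0) / n;
  return { successRate, directnessRate, avgClicks };
}

// { successRate: 0.67, directnessRate: 0.33, avgClicks: 6.33 } (rounded)
console.log(summarize(results));
```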

Remote Moderated Usability Testing (Quantitative or Qualitative / Moderated)

Remote moderated usability testing (also referred to as online moderated research) is similar to lab testing, except that participants work remotely: a moderator and study participants collaborate using screen-sharing technology and an audio bridge.

When carrying out a study the testing software collects the quantitative and/or qualitative data as the participants work through tasks. The moderator is there to guide participants through the tasks, ask questions, respond to their questions and gather feedback.

The remote nature of the tests allows researchers to spend more time with each subject, perhaps over several sessions. More data can be collected this way, making the most of the participants’ time. Researchers are able to collect and triangulate different kinds of data and to combine different kinds of methodologies within a single study. 

Which UX research methods are best for the design and development phase?

During the design and development phase, UX researchers continue to iterate the tests used in the discovery and define phases to collect qualitative data about the product as it evolves. But with the emergence of a concept or a prototype design (or designs) there is an increasing emphasis on real-world usability testing.

In this phase, A/B Testing, Click Testing, Concept Testing, Eye Tracking, Information Architecture Testing, Participatory Design, and Usability Testing come into play.

Click Tests (Quantitative / Unmoderated)

Click tests are a simple way to check whether users know where to click to perform specific actions on a proposed site or application. The researcher typically uses static images to create mock-ups of the test pages and asks participants questions such as, “Where would you click to access the shopping cart?”

This method can be used to validate anything from a full product design to a wireframe prototype to a sketch on the back of an envelope.

Concept Testing or Prototype Testing (Quantitative or Qualitative / Moderated or Unmoderated)

Concepts or prototypes are inexpensive previews of a design that allow the team to test how well the real product might work. For product team members, prototypes promote discussion and understanding. UXers can test prototypes with users and get early feedback to avoid sinking valuable resources into a bad design. 

In the long run, prototyping saves time, money, and effort. Rather than testing later on when a product or feature is built and nearly shipped, the modern agile method is to test early and often.

To get the most out of prototype testing, it’s important to remind participants that the prototype is not the finished product. Encourage them to look past any unpolished features or elements that are missing or incomplete.

Concept testing can be done on a one-to-one basis or with a larger number of participants, either in person or online.

Eye-tracking (Quantitative / Unmoderated)

Eye-tracking lets UX researchers see precisely where participants look on a screen when performing a task. It requires a special piece of equipment that tracks the user’s eyes as they look around the screen, and this information generates a heatmap of where on the page the user is concentrating at any given time. This, in turn, helps determine whether information and key navigation items are where users expect them to be.

Information Architecture Testing (Quantitative or Qualitative / Moderated or Unmoderated)

Information Architecture (IA) refers to the way that content is presented and accessed – whether through menus, breadcrumbs, categories, links or any other device that takes a user from one page to another.

Information architecture testing helps researchers define navigation, improve information taxonomy and maximize the findability of pages across a website. This is usually explored through a combination of card sorting and tree testing.

Participatory Design (Qualitative / Moderated)

Target customers of a product, service, or experience are recruited to take an active role in co-designing it.

Remote Unmoderated Usability Testing (Quantitative or Qualitative / Unmoderated)

Usability testing explores the interaction of participants with a prototype, website, app or other digital product. It captures their actions, behaviors, feedback and/or spoken-aloud impressions.

Typically, it is done remotely on the participant’s own device, though it can also take place in a dedicated UX research lab.

Remote unmoderated usability testing (also known as a remote panel study) is similar to a lab study except the participants are working remotely and without a moderator. This has significant implications for the cost of repeat, or multiple, studies.

Remote Unmoderated Usability Testing has been shown to provide statistically valid quantitative data and to give rapid access to qualitative and behavioral data.

Which UX research methods are used during the delivery phase?

During the delivery phase, all of the previous methods are still being employed to fine-tune the product, right up to, and even after, its launch. When the prototype is at an advanced, or even completed, stage, testing can be done with actual customers performing real tasks on a functional product.

Methods such as Clickstream Analysis, Customer Feedback /Voice of the Customer, Intercept Study, True-Intent Study, and UX Benchmarking can be important during this phase.

Clickstream Analysis (Quantitative / Unmoderated only)

Clickstreams are a record of the aggregated paths (sequences of URLs) that users follow as they navigate through a site’s pages.

Clickstreams allow researchers to view and analyze the paths that users took while performing specific tasks, showing what proportion of users followed each path, and the final outcomes (completed purchase or task, error, abandonment, or timeout). 

Some software incorporates heatmaps (showing an aggregation of the areas where users clicked on each page) to allow more detailed behavioral analysis.

Typically, clickstreams are used to help researchers evaluate the sales funnel and calculate the respective conversion rate of different journeys. 
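
As a concrete illustration, the sketch below groups hypothetical sessions by the exact path taken and reports each path’s share of traffic and conversion rate; the URLs and session shape are assumptions for the example.

```typescript
// Minimal sketch of aggregating clickstream paths and computing a
// conversion rate per path (hypothetical data and URLs).
type Path = string[]; // the sequence of URLs one session followed

interface Session {
  path: Path;
  completedPurchase: boolean;
}

const sessions: Session[] = [
  { path: ["/home", "/search", "/product", "/checkout"], completedPurchase: true },
  { path: ["/home", "/search", "/product"], completedPurchase: false },
  { path: ["/home", "/offers", "/product", "/checkout"], completedPurchase: true },
  { path: ["/home", "/search", "/product", "/checkout"], completedPurchase: false },
];

// Group sessions by the exact path taken and report share of traffic and
// conversion rate for each path.
function analyze(sessions: Session[]) {
  const byPath = new Map<string, { count: number; conversions: number }>();
  for (const s of sessions) {
    const key = s.path.join(" > ");
    const entry = byPath.get(key) ?? { count: 0, conversions: 0 };
    entry.count += 1;
    if (s.completedPurchase) entry.conversions += 1;
    byPath.set(key, entry);
  }
  return [...byPath.entries()].map(([path, { count, conversions }]) => ({
    path,
    shareOfTraffic: count / sessions.length,
    conversionRate: conversions / count,
  }));
}

console.log(analyze(sessions));
```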

Customer Feedback (or Voice of the Customer Study) (Quantitative or Qualitative / Unmoderated)

A Customer Feedback or VoC study is an attempt to collect the ‘true’ opinions of participants when they visit a site that is already live. These studies are an important part of ongoing research after a site or feature is launched. By being ‘always on’ they gather feedback constantly in the background. So any problems that weren’t picked up before launch can be corrected, and customer responses to any future changes will be reflected in the data.

Typically, the study will take the form of a short survey. Visitors are invited to participate as they arrive at the site’s homepage or other target pages. Before they fill out the survey, they are asked to continue doing what they came to do on the site; they spend as little or as much time as they wish accomplishing their goal, and then they start the questionnaire.

During the survey, participants are asked a series of questions, such as:

  • What did you come to do?
  • Can you tell us a bit about yourself?
  • Did you accomplish your goal?
  • Were you happy with your experience?
  • How likely are you to recommend the site to others?
  • How can we make our site better?

Data collected can help companies identify pain points for their customers, and fix them immediately. It also informs researchers about the overall health of a site and allows them to segment visitors and create and/or flesh out user personas.

Other ways that UX researchers can gather VoC data:

  • Pay attention to social media.
  • Listen to recorded customer calls.
  • Monitor customer reviews.
  • Measure your Net Promoter Score.
  • Conduct focus groups.
  • Offer a feedback form.

Key benefits of a VoC study:

  • Obtain valuable insights about your users and what they want from your website.
  • Find out if your visitors have accomplished their goals.
  • Analyze why a visitor’s online experience was positive, neutral, or negative.
  • Get the site’s Net Promoter Score and find out if visitors would recommend your site to others.

Note that these studies tend to create very large sample sizes and can take a lot of time to process.

Intercept Surveys (Quantitative or Qualitative / Unmoderated)

Adding a few lines of JavaScript code to a website, or integrating a mobile app with an SDK (software development kit), allows visitors to be intercepted with an automatically triggered survey as they arrive on a particular section or page. 

This could be a general-purpose survey, such as a Customer Feedback survey, but sometimes an automated survey might be triggered to ask a single question with a very specific objective. 

For example, whenever a user exits the checkout page without completing a transaction, an intercept survey might ask, “What prevented you from completing your purchase just now?”
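
Below is a minimal browser-side sketch of how such an exit intercept might be wired up; the page path, the order-confirmed event, the exit-intent heuristic, and the showSurvey helper are all hypothetical, not any particular vendor’s SDK.

```typescript
// Minimal sketch of triggering an exit intercept on the checkout page
// (hypothetical page path, event name, and survey helper; real survey
// platforms ship their own SDKs for this).
const onCheckoutPage = window.location.pathname.startsWith("/checkout");
let purchaseCompleted = false;

// Assume the checkout flow dispatches this custom event when an order is confirmed.
document.addEventListener("order:confirmed", () => {
  purchaseCompleted = true;
});

// Hypothetical helper that renders a one-question survey overlay.
function showSurvey(question: string): void {
  console.log(`Survey shown: ${question}`);
}

// Fire the intercept only if the user looks like they are leaving checkout
// without buying (cursor heading for the browser chrome at the top of the page).
if (onCheckoutPage) {
  document.documentElement.addEventListener("mouseleave", (event: MouseEvent) => {
    if (event.clientY <= 0 && !purchaseCompleted) {
      showSurvey("What prevented you from completing your purchase just now?");
    }
  });
}
```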

An intercept can be a very direct, even abrupt, method of speaking to a customer so it should be handled sensitively. It’s important not to interrupt customers while they are actually using the site effectively, especially if they are in the process of buying something. 

And in the above example, best practice might be to offer assistance as well as requesting information about the buying experience. So, for example:

“Sorry, you weren’t able to complete your order. We’d be happy to assist you and will offer you a discount for your inconvenience. Would you like to chat to an assistant now?” 

As with VoC surveys above, intercept surveys can reveal how visitors use a site or a service, whether they have been able to achieve their goals and whether they are satisfied with the overall experience provided.

True-Intent Study (Quantitative / Unmoderated)

True-intent studies ask organic visitors to a website, mobile site or mobile app what they came to do, and gather actionable information about their experience.

True-intent studies are a cost-effective research approach, allowing researchers to get answers to questions like: Who is coming to the site? Why are they visiting? Did they get what they needed? How are they using the site? How do they feel about it?

UX Benchmarking Study (Quantitative and Qualitative / Unmoderated or Moderated)

UX benchmarks allow you to measure the baseline performance of digital products and services and track how changes move the UX needle over time. Typical benchmark studies are either longitudinal, in which measurements are continuously made against the same baseline, or competitive, in which products are measured periodically against competitors’ offerings.

With benchmarking, participants carry out the same tasks on a variety of websites in order to determine:

  • How their site performs relative to competitors
  • The usability, standing and feature set of a product within the industry
  • The successes and failures of competitors
  • Best-in-class examples to emulate

In order to score different sites, various measurements can be combined. Collecting both behavioral data (such as task success, task time, page views) and attitudinal data (such as ease of use, trust, and appearance) gives a sophisticated measure of relative standing.
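
One simple way to combine such measures into a single benchmark score is to normalize each one to a common scale and weight them; the metrics, weights, and normalization below are illustrative assumptions, not a standard formula.

```typescript
// Minimal sketch of combining behavioral and attitudinal measures into a
// single benchmark score (hypothetical metrics, weights, and scales; there
// is no single standard formula).
interface BenchmarkInputs {
  taskSuccessRate: number; // 0..1, behavioral
  avgTaskTimeSec: number;  // behavioral, lower is better
  easeOfUse: number;       // 1..7 rating, attitudinal
  trust: number;           // 1..7 rating, attitudinal
}

// Normalize each measure to 0..1 so they can be weighted and summed.
function benchmarkScore(m: BenchmarkInputs): number {
  const timeScore = Math.max(0, 1 - m.avgTaskTimeSec / 300); // assume 300 s is a poor time
  const easeScore = (m.easeOfUse - 1) / 6;
  const trustScore = (m.trust - 1) / 6;
  // Equal weighting of the four measures (an assumption).
  return 0.25 * m.taskSuccessRate + 0.25 * timeScore + 0.25 * easeScore + 0.25 * trustScore;
}

// e.g. comparing your site against a competitor on the same tasks
const ours = benchmarkScore({ taskSuccessRate: 0.82, avgTaskTimeSec: 95, easeOfUse: 5.6, trust: 5.1 });
const theirs = benchmarkScore({ taskSuccessRate: 0.74, avgTaskTimeSec: 120, easeOfUse: 5.9, trust: 4.8 });
console.log({ ours, theirs });
```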

For a description of the most popular UX research tools, and when they are applied, see the next chapter, Which UX research tools?