Card Sorting 101: A User Researcher’s Guide
So much more than the game of Solitaire.
As part of our UX 101 education series on the different types of studies and user research methodologies you can run with our user research platform, this article introduces the exciting world of card sorting.
What is card sorting?
Card sorting is a research method in which participants organize content items into groups that make sense to them, helping you design or evaluate the information architecture of a site or app. It also helps designers, developers and stakeholders understand which changes need to be made to the structure based on user insights, rather than relying on internal opinion.
Generally speaking, there are two kinds of card sorts:
- Open Sort: Users are asked to sort content items into groups and then give the groups a name.
- Closed Sort: Users sort content items into predefined category names.
What are common use cases for card sorting?
There are two common use cases for card sorting:
Defining a navigation structure at the beginning of the design process. This is often when an open card sort is used – to get direct feedback about users' mental models and provide direction to the product team.
Investigating how to improve a current navigation structure with the end goal of redesigning it. In this scenario a closed card sort is often used to validate results from previous open sorts/taxonomies.
How do card sort studies work?
Card sorting works by asking participants, either in-person or remotely, to organize content items in a way that makes sense to them.
Your design team will no doubt have some hypotheses about how the content should be organized to improve the user experience, and will want to test these against behavior gathered from a card sort.
So first things first you’ll need some participants – the question, as always, is how many?
A good rule of thumb is that quality results can be obtained from 15-30 participants in a face-to-face context (Nielsen 2004; Tullis and Wood 2004). For online card sorts, it's ideal to obtain at least 50-75 participants. A benefit of conducting card sorts online is being able to gather results from hundreds of participants at no additional cost beyond recruitment.
Of course, the right number of participants depends on the level of precision you need around the proportion of cards in each category. Most card sorts we conduct have between 50-150 participants, which provides a margin of error of around 7-9% at a 90% confidence level.
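To see how sample size drives that margin of error, here is a minimal sketch of the standard formula for a confidence interval around a proportion. The function name and the worst-case assumption of p = 0.5 are our own illustration; exact figures in a real study depend on the observed proportion for each card.

```python
import math

def margin_of_error(n, p=0.5, z=1.645):
    """Half-width of a confidence interval for a proportion.

    n: number of participants
    p: expected proportion (0.5 is the worst case, i.e. the widest interval)
    z: z-score for the confidence level (1.645 corresponds to ~90% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case margin of error at a few common sample sizes
for n in (50, 100, 150):
    print(f"n={n}: ±{margin_of_error(n):.1%}")
```

Running this shows how the interval tightens as you add participants, which is why online sorts that can recruit beyond 50 participants give you noticeably more precise category proportions.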
How many items to include in a card sort:
The time to complete a card sorting study depends on the number of items to be sorted. As always, it's wise to avoid participant fatigue, as the quality of your results will decline. A general rule of thumb is that sorting 30 items into categories in an open card sort takes approximately 20 minutes. Closed sorts typically take participants about half that time.
How many card sorts are needed for a site or app:
Keep in mind that multiple individual card sorts are necessary when different user groups use different parts of the site or app. This means it is wise to plan out the different personas early on and home in on all the sections you'll need a card sort for.
Combining card sorts with tree testing:
You may want to consider running a tree test after your card sorts as a means of validating their results. Tree tests measure the findability of items using the new IA.
We also recommend that you create questions with open-ended responses to collect more information from participants before and after a card sort. This can be especially useful for an online closed card sort.
When should you use card sorting?
Here are some important factors to consider when deciding whether a card sort is the right approach for your research goal.
- Online card sorting is a simple way to get quick feedback on how your users think about your site structure and navigation and ways to improve it.
- In-person card sorting can be more time-consuming and resource intensive.
- Card sorting can be biased because it is not based on the actual tasks users would perform on your website.
- Analysis takes time, especially with the open card sorting method, because it depends on the complexity of the data sets and whether results are consistent between users.
What results do you get?
The results of your open card sort will come in the form of a dendrogram, which will look something like this:
It shows how participants grouped certain items together and is calculated using hierarchical clustering algorithms.
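The clustering behind a dendrogram can be sketched in a few lines: count how often each pair of items was grouped together across participants, convert that co-occurrence into a distance (0 = everyone grouped them, 1 = no one did, matching the scale discussed below), and feed it to a hierarchical clustering routine. The item names and sort data here are hypothetical; this is an illustration of the general technique, not UserZoom's exact computation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

items = ["Shirts", "Pants", "Shoes", "Returns", "Shipping"]

# Each participant's open sort: a list of groups (sets of item indices).
sorts = [
    [{0, 1, 2}, {3, 4}],
    [{0, 1}, {2}, {3, 4}],
    [{0, 1, 2}, {3}, {4}],
]

n = len(items)
co = np.zeros((n, n))
for groups in sorts:
    for g in groups:
        for i in g:
            for j in g:
                co[i, j] += 1
co /= len(sorts)          # fraction of participants who paired i with j

dist = 1.0 - co           # 0 = all participants grouped them, 1 = none did
np.fill_diagonal(dist, 0)
Z = linkage(squareform(dist), method="average")

# Leaf order of the dendrogram, without plotting
info = dendrogram(Z, labels=items, no_plot=True)
print(info["ivl"])
```

In this toy data, "Shirts" and "Pants" were grouped by every participant, so they merge first at distance 0 – exactly the pattern you read off the real dendrogram's linkage scale.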
The results of a closed card sorting study come in the form of a frequency matrix for either your items or categories, which looks something like this:
The results shown in the dendrogram and frequency matrix can largely be broken down into two different types:
- Which items users expect to appear together most often
- How many, and which, categories make sense to the majority of users (for open card sorts)
If you asked any open-ended survey questions, you will see those results as free-text responses.
Tips for analyzing your results
If the dendrogram (figure 1) seems a little scary to try and comprehend, first and foremost, don’t worry – you’re not the first or the last to feel that way! Here’s a quick breakdown to help you make sense of the data.
The colored sections of the dendrogram show the groups that were created based on participants' answers in an open card sort. Point nodes (the yellow circles) visually call out the level at which items, sets of items, or clusters are connected. In UserZoom you can click on the point nodes to highlight the items that have been grouped together and to see the three most common category names given by participants.
See the scale from 0 to 1 at the bottom? That represents the linkage distance. A distance of 0 means all participants grouped the items together. The greatest distance, 1, means no participants grouped the items together, so there is no relationship between those two items.
The frequency matrix is more straightforward – the example (figure 2) displays the percentage of participants who agreed that an item belongs under a particular category, as a measure of confidence. For example, 67% of participants agreed that the item “Sickness, injury or death…” belongs in the “Worldwide…” category.
You’ll have to decide what the target threshold of agreement should be, but we recommend at least 75% as a general guideline. The higher the percentage, the more confident you can be that the mapping matches your users’ mental models.
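That agreement check is easy to apply to raw closed-sort data. The sketch below, with hypothetical item and category names of our own, tallies where each participant placed an item, computes the agreement on the most popular category, and flags anything below the 75% threshold for review.

```python
from collections import Counter

# Hypothetical closed-sort placements: for each item, the category
# each participant filed it under.
placements = {
    "Lost luggage":     ["Baggage", "Baggage", "Baggage", "Claims"],
    "Cancelled flight": ["Claims", "Baggage", "Claims", "Policies"],
}

THRESHOLD = 0.75  # minimum agreement before trusting the item-category mapping

for item, cats in placements.items():
    top_cat, count = Counter(cats).most_common(1)[0]
    agreement = count / len(cats)
    status = "ok" if agreement >= THRESHOLD else "review"
    print(f"{item}: {top_cat} ({agreement:.0%}) -> {status}")
```

Here "Lost luggage" reaches 75% agreement on "Baggage" and passes, while "Cancelled flight" splits across three categories and gets flagged – a signal that the category labels themselves may need rework.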
Want to learn even more about Card Sorting?
After finishing the 52-minute course, you’ll be able to:
- Know when to use a Card Sort
- Know the type of data you can collect with a Card Sort
- Create a Card Sort in UserZoom
- Interpret the results of a Card Sort
Mina is a UX Researcher at Google. Previously she was on the Professional Services team at UserZoom in San Jose, CA. Mina has her PhD in Information Systems with a concentration on Innovation with User Experience. In her spare time Mina enjoys playing guitar and doing yoga.
John is a User Experience Researcher with 10+ years of practical experience in a broad range of quantitative and qualitative user research, analysis, and design for software and web products. John is focused on helping companies learn how to make their web and mobile products easier to use.