Working in a UX agency means that sometimes you’ll get a limited budget for an entire project or for certain parts of it.

In my time at Grapefruit, it has been proven to me time and time again that most clients will choose to cut costs in the validation phase of a project. As designers we need to educate our clients about our desired process on a daily basis. Still, it’s often difficult for them to understand the value of usability tests. So what can we do to validate our ideas, prototypes and even live products and marketing campaign websites?

In this article I will describe a few validation methods that we can use by ourselves or with the help of our colleagues. Since these methods are used internally, without the help of actual users, I’ve called them internal evaluations.

Image by Crew

Various validation methods

In the various design teams that I’ve been part of, I’ve often come across contrasting practices for validating products. I’ll mention some of them:

  • Design critique — There’s much talk these days about how design critique should take place. In essence, it’s the process of sharing a piece of design and gathering feedback from your colleagues or other peers.
  • Internal evaluations — These are a set of methods which involve internal experts (colleagues from the design or other teams, stakeholders or even clients). During these methods, a facilitator will create a setup where the experts will empathize with users through various techniques and analyze the experience from their point of view. In this article I will describe two methods that I’ve used before for internal evaluations: Informal Action Analysis and Cognitive Walkthroughs.
  • Usability tests — The most efficient method of validating a product, from a qualitative standpoint. In a usability test, a facilitator will invite a real or potential user to try to accomplish specific tasks (in the context of the product) while being observed and asked questions.
  • Analytics & Questionnaires — These are tools that will help you discover good or bad patterns of use in your live product. Hundreds of such tools are available online and offline.

In a perfect environment, all of these practices would be used in every project. But we’ve already established that the perfect environment doesn’t exist, so in the rest of this article I’ll describe how an internal evaluation works.

Image by Alejandro Escamilla

Internal evaluation

The two internal evaluation methods that our team likes and has used most are Informal Action Analysis and Cognitive Walkthroughs. Let’s discover how they work and how to set them up.

Before diving into each method, I’d like to mention that both of them are task based. This means that the expert who is about to evaluate a functionality of a product will have to do so starting from a specific task. A task is similar to a Job-to-be-done (“The customer job-to-be-done starts when they want something that will improve their life, and it ends when they obtain or give up on obtaining the object of their desire”), except it’s more specific and detailed. In a good design process, tasks are determined in the research phase of the project.

Let’s take a look at a specific example of a task. In the following example I’ve considered that the product is an ecommerce website that sells electronics, and the user’s task is to purchase a specific product. The task might be framed like this:

I want to buy a smart watch. I’d like the watch to be black. The only features I’m interested in a smart watch are sleep monitoring and counting my walking steps. It’s really important that the watch has a good battery and that I won’t need to constantly charge it. I want the watch to be under 100 dollars. I’m brand agnostic, but I’m aware of the big brands in this niche.

I will use this example of a task to describe the two internal evaluation methods mentioned above.

Informal Action Analysis

Informal Action Analysis is a method where experts accomplish given tasks and note down each of the steps they went through in their flow. They then write down the problems of layout, usability and perception they encountered at each step, along with potential ways of improvement.

Following the above mentioned task example, an expert might write down something like this:

I want to buy a smart watch. I’d like the watch to be black. The only features I’m interested in a smart watch are sleep monitoring and counting my walking steps. It’s really important that the watch has a good battery and that I won’t need to constantly charge it. I want the watch to be under 100 dollars. I’m brand agnostic, but I’m aware of the big brands in this niche.

  • Step 1 — I went to the homepage of the e-commerce website.
  • Step 2 — I used the search option to search for ‘smartwatch’.
  • Step 3 — I arrived on the search results page.
  • Step 4 — There are way too many results, so I tried to narrow them down. I used the filtering system and was able to filter the products by price, brand, stock availability, ratings and size.
  • Step 5 — I selected the desired product, which was a Withings Move smart watch.
  • Step 6 — I went through its description, images and reviews.
  • Step 7 — I selected the color black and added the watch to my cart.
  • Step 8 — I proceeded with the checkout process.

An expert might then write down the problems encountered at each step: maybe the available filtering system is not enough for the user, maybe the user expected to be able to compare two products and such a feature is not available, and so on. Experts might also note layout problems (e.g. “The ‘Checkout’ button was not visible and I had trouble finding it”) and wording problems (e.g. “The link ‘Discover more’ was not very self-explanatory. I didn’t understand what I was going to discover after clicking on it.”).

After a number of experts have done the analysis (from our experience, four or more), patterns will begin to emerge. It’s then the facilitator’s job to collect all the insights, rank them by importance and turn them into solutions.
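To make the pattern-spotting concrete, here’s a toy sketch (not a tool we actually used) of how a facilitator might tally the problems reported by four experts; the problem labels are invented for illustration:

```python
from collections import Counter

# Hypothetical problems noted by four experts during Informal Action Analysis.
# In practice these would come from each expert's step-by-step notes.
expert_notes = [
    ["filters too limited", "no product comparison", "checkout button hard to find"],
    ["filters too limited", "'Discover more' link unclear"],
    ["no product comparison", "filters too limited"],
    ["checkout button hard to find", "filters too limited"],
]

# Count how many experts mentioned each problem; problems that recur
# across experts are the patterns worth prioritizing.
counts = Counter(problem for notes in expert_notes for problem in notes)
for problem, n in counts.most_common():
    print(f"{n}/4 experts: {problem}")
```

A problem mentioned by every expert is an obvious candidate for the top of the report, while one-off mentions may simply reflect personal preference.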

Cognitive Walkthroughs

Cognitive Walkthroughs are similar to Informal Action Analysis, in that the involved experts will also be given specific tasks to accomplish.

In addition to each task, the experts will also be given the actual steps that the designer thought of and used to develop the specific user flows and functionalities. For each given step they will have to write down answers to the following questions:

  • Does the user understand that this step is needed to reach their goal?
  • Will the user see the correct action they need to perform to produce the outcome?
  • Will the user recognize that this (not another) is the action they need to take?
  • Will the user understand the feedback?

Let’s see how this would work for our example and assume that the expert is given the same task, with the following steps to go through:

I want to buy a smart watch. I’d like the watch to be black. The only features I’m interested in a smart watch are sleep monitoring and counting my walking steps. It’s really important that the watch has a good battery and that I won’t need to constantly charge it. I want the watch to be under 100 dollars. I’m brand agnostic, but I’m aware of the big brands in this niche.

  • Step 1 — Go to the homepage of the e-commerce website.
  • Step 2 — Press on the option ‘Categories’ and select ‘Smart watches’.
  • Step 3 — On the smart watches category page, use the filtering system to filter the products by price, brand, stock availability, ratings and size.
  • Step 4 — Select the desired product from the ones available or return to Step 3 and repeat the filtering process.
  • Step 5 — Select the color black for the product.
  • Step 6 — Add the product to your shopping bag.
  • Step 7 — Proceed to checkout.

For each of the seven given steps, the expert will have to answer the four questions mentioned earlier. Here’s an example of how an expert might answer them for Step 2.

Step 2 — Press on the option ‘Categories’ and select ‘Smart watches’.

Does the user understand that this step is needed to reach their goal?
– Not necessarily. My initial reaction was to go to the search option and type ‘smartwatch’. The ‘Categories’ option is not that visible.

Will the user see the correct action they need to perform to produce the outcome?
– In this particular example, ‘Categories’ was not a very visible action that I knew I could take. The button itself was barely visible, although I’m used to its position, so that wasn’t necessarily a problem.

Will the user recognize that this (not another) is the action they need to take?
– Again, my initial impulse was to discover the smart watches list by using the search option. My answer for this question is ‘no’.

Will the user understand the feedback?
– Clicking on the ‘Categories’ option displayed a list of all the product categories on the website, which was the result I anticipated. It was then easy to select the ‘Smart watches’ category.

Again, after a number of experts have done the analysis, it won’t be difficult to discover patterns. This method is particularly useful because the four questions give experts sharper, more specific perspectives for their analysis.

Image by Tran Mau Tri Tam

How to set up an internal evaluation

A few months back one of our clients at Grapefruit asked us to perform an evaluation for their loyalty programs platform. They wanted to make a series of A/B tests on this website and needed some starting points. Our job was to discover some of the bigger problems users encountered on this platform. Since they were already familiar with our internal evaluation approach, we decided to use it for this project as well.

We kickstarted the project with a workshop involving all the stakeholders. We used the workshop to refresh their memory of how these methods work. We also outlined the most important actions users can do in the platform.

After the initial workshop we began the actual work. We wrote down the tasks (10 tasks in total), aiming to describe the most important actions on the platform. The tasks are the same for Informal Action Analysis and for Cognitive Walkthroughs (and are complemented by the desired steps for CW).

Every colleague at our agency is more than welcome to participate as an expert in these sessions, and this project was no exception. We invited eight colleagues from teams like design, development, content and even administrative. Some of them were new to these methods and some had already participated in past evaluations. We organized a meeting where we presented the two methods and made sure that each one of them knew exactly what to do.

After we settled on the group of experts, we divided them into two teams and split the 10 tasks into two sets. The evaluation would then go like this:

  • The first team would evaluate the first set of five tasks, using the Informal Action Analysis method (two of the experts on desktop, two on mobile).
  • The second team would evaluate the second set of five tasks, using the Cognitive Walkthroughs method (two of the experts on desktop, two on mobile).

The evaluations were done individually. Since all of the experts were involved in other projects, we gave them three days for the actual evaluations. After each colleague finished their evaluation, we switched the tasks, the methods and the types of devices (desktop/mobile) between the two groups, as follows:

  • The first team would evaluate the second set of five tasks, using the Cognitive Walkthroughs method (two of the experts on desktop, two on mobile).
  • The second team would evaluate the first set of five tasks, using the Informal Action Analysis method (two of the experts on desktop, two on mobile).

After another three days of evaluations, we gathered the insights and started working on a very detailed report of problems, divided into three clusters: Crucial Problems, Important Problems and Less Important Problems (more than 60 problems were discovered in this internal evaluation).
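As a hypothetical sketch of that clustering step: if the facilitator tracks how many of the eight experts flagged each problem, bucketing could look like this in Python (the thresholds and problem names are my own invention, not our actual criteria):

```python
def cluster(problem_counts, total_experts):
    """Bucket problems by how many of the experts flagged them.

    The ratio thresholds below are illustrative, not a fixed rule;
    in practice the facilitator's judgment also weighs in.
    """
    clusters = {"Crucial": [], "Important": [], "Less Important": []}
    for problem, n in problem_counts.items():
        ratio = n / total_experts
        if ratio >= 0.75:
            clusters["Crucial"].append(problem)
        elif ratio >= 0.5:
            clusters["Important"].append(problem)
        else:
            clusters["Less Important"].append(problem)
    return clusters

# Invented example: 8 experts, 3 reported problems.
print(cluster({"broken login link": 7, "unclear points rules": 4, "small font": 2}, 8))
```

The point is simply that frequency across independent evaluators is a cheap, defensible first signal for severity, which the facilitator can then adjust by hand.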

For each of the problems we suggested specific solutions (with sketches, where necessary). The report was compiled into a presentation that also included the description of the two methods involved and how we set up the evaluation sessions, for future reference.

What did we learn?

Here are a few tips that we’ve learned from this and other evaluation projects:

  • Don’t underestimate the time an expert will need for each evaluation. Some will do it in no time, while others will take unexpectedly long on the tasks. Cognitive Walkthroughs are usually the more difficult method to understand and evaluate with.
  • Build a spreadsheet template for the answers and ask the experts to use it during evaluations. This will make it easier to collect insights once the evaluations are complete and you’re working on the final report.
  • It’s better to ask colleagues that are not new to these types of evaluations, as experienced ‘evaluators’ will discover better insights.
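If you want a starting point for such a template, here’s a minimal Python sketch that generates a CSV answer sheet for a Cognitive Walkthrough. The column layout and file name are illustrative assumptions, not the template we used:

```python
import csv

# The four Cognitive Walkthrough questions, one column per question.
QUESTIONS = [
    "Does the user understand that this step is needed to reach their goal?",
    "Will the user see the correct action they need to perform?",
    "Will the user recognize that this is the action they need to take?",
    "Will the user understand the feedback?",
]

with open("cw_template.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Task", "Step", "Device"] + QUESTIONS)
    # One empty row per step of the example task, ready for an expert to fill in.
    for step in range(1, 8):
        writer.writerow(["Buy a smart watch", step, "desktop"] + [""] * len(QUESTIONS))
```

Each expert gets their own copy, which keeps the answers uniform and makes the final collation into a report far less painful.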

Conclusion

We’re constantly surrounded by articles, books and courses about the ‘perfect process’. A lot of the time this ‘process’ is not possible, and it’s our responsibility to make the most of the tools available. Internal evaluations won’t replace usability tests, but they will help you discover potential problems in a product with less effort.