Six guidelines for successful prototype testing

Here are some expert tips on testing prototypes, using a range of different research methods.

“Is my prototype testable?” It’s a frequently asked question when running usability studies with a prototype that isn’t fully functional.

The rule of thumb is: the more complete the prototype, the better.

I’ve been running usability testing for over eight years now, and I still regularly review prototypes that are quite limited. Even so, some professionals new to UX research expect just a few pages to yield enough insight to validate their design hypothesis. If you are only ‘validating’ your prototype, it may be time to manage your expectations about the feedback you are likely to receive.

Testing prototypes at an early stage and failing fast is excellent. Just don’t expect a massive amount of insight if users cannot do much with the prototype, if you are asking them the wrong questions, or if you only run one test.

Besides, running usability tests on prototypes requires a different approach than testing a live site. Spending more time improving your prototype, and writing tasks that account for its limitations, usually pays off.

Here are some guidelines about testing prototypes:

1) Choose the most relevant research questions based on your prototype fidelity

Usually, there are three levels of fidelity: sketches, low fidelity, and high fidelity.

Research questions for early-stage sketches usually include:

  • How are you solving this problem currently?
  • Is the prototype solving your problem?
  • What does the prototype do?
  • Who is the competition?
  • Do users see themselves using the prototype? In which scenarios?

Research questions for a low fidelity prototype typically include:

  • What would users expect from the prototype before using it?
  • Do users understand what the product is for and what it does?
  • How do users feel when navigating the prototype?
  • Does anything seem confusing or unnecessary?
  • Do users understand the content, calls to action, and categories?

Research questions for high fidelity prototypes usually include:

  • Is the prototype solving the problem in the best possible way?
  • Is this product for you?
  • What’s the user’s first action when navigating the prototype?
  • What are the features that users ignore?
  • Are there blockers in the journey?

2) Define how insightful your prototype is

You must know in detail how your prototype works, and also what does not work, so you can distinguish a genuine usability issue from a prototype limitation.

Also, reviewing your prototype’s length and depth will help you define how much insight you can expect from it. There are four levels of insight:

  • Minimum insight: In this case, your prototype is only one page (I have seen this!). You will not identify usability issues when testing this stimulus, as users cannot complete a task with a single page. You will get minimal feedback about user expectations, concept comprehension, and where users would click to reach another section.
  • Limited insight: Your prototype contains a few pages that only allow users to move forward and nothing else. In this case, you will likely not identify usability issues. However, you will get feedback about concept and content comprehension and user expectations. You might also get insight into UI interactions, if those are available: for example, clicking a CTA, selecting an item, or opening a message.
  • Some insight: Your prototype contains the main pages and some subpages, allowing the user to complete one task. For example, looking for a product and comparing it against other products. You will be able to test usability but not findability, as users will quickly learn how to complete the task using the only route available to them.
  • Rich insight: Your prototype contains the main pages and various subpages, allowing the user to complete three or more tasks. You will be able to test both usability and findability for the areas that are more fully designed.

3) Align your prototype with the level of insight required

If you are not happy with the level of insight you are likely to get from your prototype, add more levels or pages so users can complete more tasks. Make sure to include main pages and subpages so users can navigate them and complete a task.

If you don’t have enough time or resources to improve your prototype, expect less feedback and test often.

4) Write a test script that considers limitations

  • Tell participants that they will be testing a prototype and that some areas will be unavailable or limited. If participants don’t expect this behaviour, they might abandon the task at the first blocker in the journey.
  • Provide realistic content, avoiding dummy text that will confuse participants during the task.
  • Describe what participants need to do accurately, based on the options available in the prototype. Instead of saying: “Please make a transaction to someone using this prototype,” say: “Imagine that you would like to transfer £240 to Emma Jones. Please show us how you would do it.”

5) Choose the most suitable session type to test your prototype

Moderated or unmoderated? I use Userzoom (of course!) to run UX studies for our clients.

Recruiting participants on our user research platform is faster when launching unmoderated studies. However, you can run moderated sessions to probe further. Userzoom allows you to run both moderated and unmoderated studies. It depends on how much time you have available and how many whys you need to ask.

6) Analyse your findings and re-test

Make a list of usability issues and findings, and provide granular feedback indicating what is not working and why.

Not enough insights yet? Your test results are only as insightful as your prototype and your research questions.

Also, testing your prototype once will not give you enough insight to refine your product. Validating prototypes is not enough. Test often, get feedback, and iterate. Be brave and pivot if needed. You will find your way to a fantastic product!