12 Questions About UX Benchmark Studies

If you are planning to conduct UX Benchmark Studies, here are the answers to some of the questions that you might have:

1. How do you convince C-Level to do UX Benchmarking?

A: When talking to C-Level management about UX Benchmarking, focus on the benefits they will get out of it. Let them know that they will gain critical information about how to improve on or beat the competition. You can even point out that benchmarking can help you win customers away from competitors whose sites are not as easy to use or navigate as yours.

2. How many tasks would you recommend using in a benchmarking study? At what point are there too many, such that the user may lose interest and leave in the middle of the study?

A: For a standard study, we recommend four tasks that are not too complex, with no more than two follow-up questions per task, four final questions after all the tasks, and two to four pre-study questions.

3. How many users (rule of thumb) would you recommend recruiting for both a between-subjects & within-subjects study?

A: 100 participants per site for a between-subjects study and 200 participants for a within-subjects study.

4. Do promoters & detractors have bias based on their own brand loyalty?

A: Yes, they do, depending on their experience. To avoid this skew, make sure a good portion of your participants are not users of your client’s brand; they should be no less than half of the sample.

5. Do you have to add JavaScript to the site for a benchmark study?

A: No, with UserZoom you don’t need to add any JavaScript code, which is why it is easy to run benchmark studies on your competitors’ websites.

6. How can you make participants follow a click path? 

A: Participants are not asked to follow predefined click paths; we generate the click paths from wherever they freely navigate.

7. Was the branching to the American and Delta Air Lines sites managed in UserZoom logic?

A: Yes, all the logic was managed by UserZoom.

8. How were the user heatmaps generated during the “unmoderated remote testing” process?

A: We tracked clicks on the pages that participants navigated to by using a browser add-on, which all participants had to activate before beginning the study. Because the tracking ran in the add-on, we didn’t have to install JavaScript code on either website.
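
For readers curious what this kind of click tracking looks like in practice, here is a minimal, purely illustrative sketch of a content script that a browser add-on might use to record clicks for later heatmap aggregation. UserZoom’s actual add-on is proprietary, so the collection endpoint and payload fields below are hypothetical assumptions, not its real implementation.

```javascript
// Illustrative sketch only: record click coordinates from a browser add-on's
// content script so they can later be aggregated into a heatmap.
// The endpoint URL and payload fields are hypothetical, not UserZoom's API.
document.addEventListener('click', (event) => {
  const payload = {
    page: window.location.href,       // page the click happened on
    x: event.pageX,                   // click position within the document
    y: event.pageY,
    viewportWidth: window.innerWidth, // lets the heatmap scale across screen sizes
    timestamp: Date.now()
  };

  // Send the click to a (hypothetical) collection service without blocking the page.
  navigator.sendBeacon('https://example.com/collect-clicks', JSON.stringify(payload));
}, true); // use the capture phase so clicks are logged even if the page stops propagation
```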

9. Do you tell the participants that their answer is incorrect, and do they have an opportunity to correct it? How does that affect their answers to Likert-scale questions?

A: You can tell them, but if you don’t want their answers to be biased by knowing that they got the task wrong, then you shouldn’t.

10. If you tell the user that they got the question incorrect, won’t that hinder the rest of the test by making the user feel like they are doing something wrong?

A: No, not if you say it in the right way: let them know that you want to find out what is difficult on your site, rather than dwelling on why they failed.

11. How can the American Airlines users be so much more satisfied when the time on task was almost the same for both American Airlines and Delta?

A: Satisfaction is not always tied to how much time a task takes. If participants were not satisfied with the experience, even when the task was very fast to complete, their ratings reflect the experience rather than the time.

12. Can you provide a brief description about moderated remote usability testing?

A: Moderated remote usability testing is similar to in-person usability testing, but the moderator watches a participant complete tasks over a screen-sharing tool (such as GoToMeeting) and asks them to think aloud, usually over the phone. The moderator observes where the participant has difficulties and usually records both the conversation and the participant’s screen.