The Task-Based UX Benchmark Study

UserZoom ran a quick task-based UX benchmark study between Merck and Bristol-Myers Squibb, two well-known pharmaceutical companies, to compare the experience of finding product information online.

  • We ran the unmoderated remote task-based benchmark study with 60 in-the-wild users on their own devices over the course of a single day
  • We split participants equally between each website so that 30 went to Merck and 30 went to Bristol-Myers Squibb
  • The Online Task: You are taking Zolinza/Orencia and want to find the patient information page. When you have reached the patient product information page we will automatically redirect you and ask a few questions.
  • We validated task success via URL validation of the target page


Let’s Get Ready to Rumble!

We showed all participants an image of Merck’s and Bristol-Myers Squibb’s homepage and asked them to choose which site they associated with several UX attributes.

Once more we see that first impressions tend to favor the homepage with minimal scrolling and text (Merck), while the homepage with more text (Bristol-Myers Squibb) is usually seen as both Informational and Overwhelming.

In particular, note just how wide the gap was when it came to being seen as Easy and Welcoming.

Screenshot Click Test

We split the participants evenly and asked where they would click if they wanted to check on the company’s stock prices.


Merck

60% of participants correctly clicked on Investors in the header.

Bristol-Myers Squibb

57% of participants correctly clicked on Investors.

Tree Test

We split the participants evenly and asked where they’d be able to find information on upcoming clinical trials.


Merck

93% of participants correctly found Clinical Trials under R&D, but only 67% found it on their first try.

Bristol-Myers Squibb

80% of participants correctly found Clinical Trials under R&D, but only 50% were able to find it on their first try.

Online Task Outcomes

We asked participants to find the patient information page for a prescription drug: Zolinza on Merck and Orencia on Bristol-Myers Squibb. To determine whether a user was successful, we checked whether they reached the URL of the product's patient information page.

If a user reached this page, they were labeled a Success.

Non-Success covered two outcomes: Abandon, where a user gave up due to difficulties with the website, and Error, where a user said they had found the patient information before reaching the correct page.
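The Success / Error / Abandon labeling can be sketched in a few lines. This is just an illustration of the logic described above; the function, the parameter names, and the placeholder URLs are hypothetical, not UserZoom's actual validation tooling:

```python
def label_outcome(final_url: str, said_found: bool, target_url: str) -> str:
    """Classify one participant session against the validated target URL.

    Success: the participant reached the patient information page.
    Error:   they claimed to have found the information elsewhere.
    Abandon: they gave up without claiming success.
    """
    if final_url == target_url:
        return "Success"
    if said_found:
        return "Error"
    return "Abandon"

# Hypothetical sessions as (final URL, claimed to have found the info?)
sessions = [("/patient-info", True), ("/press-room", True), ("/home", False)]
labels = [label_outcome(url, found, "/patient-info") for url, found in sessions]
# labels == ["Success", "Error", "Abandon"]
```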


Both sites had a majority of users successfully complete the online task, but users on Bristol-Myers Squibb’s site were 6 percentage points more likely to complete it than Merck’s users.

Both sites were almost neck and neck in how efficient their successful users were, though on average Merck’s users completed the task in slightly less time and with fewer pageviews and clicks.

Merck user session

Bristol-Myers Squibb user session

Non Success

Merck’s unsuccessful participants were more likely to err than abandon, meaning they felt they had reached the correct page before they actually did. When the user who abandoned was asked to explain why they abandoned they said:

  • “It was too hard to find the patient information. There was too much information.”

Unsuccessful participants on Bristol-Myers Squibb’s site were split evenly between erring and abandoning. What’s interesting is that the time, pageviews and clicks are similar to the successful participants. When the users who abandoned were asked to explain why they abandoned they said:

Merck user session

Bristol-Myers Squibb user session

Ease of Use

After the task we asked all users to rate how easy or difficult it was to accomplish, on a scale where 1 = Very Difficult, 4 = Neutral, and 7 = Very Easy.

A majority of participants on both sites rated the task relatively high on the ease of use scale, but Merck finished with the higher overall rating.

Problems & Frustrations

We asked the users which, if any, of the following problems they encountered while on the site.


Participants who selected “Other” were given an open-ended follow up question to explain. The users who chose this for Merck said:

  • Couldn’t find the medicine I was looking for
  • My own fault, misread the task as finding the patent info – that’d be interesting to include
  • Text was too small to read
  • On the product page the menu has picklist arrows which made me assume when I clicked on the section a list would open underneath, which it didn’t. Never seen the arrows used and that functionality wasn’t included – kinda confusing

Bristol-Myers Squibb

Participants who selected “Other” were given an open-ended follow up question to explain. The users who chose this for Bristol-Myers Squibb said:

  • Site seemed dated
  • What’s the difference between “Product” and “Product Home”? If it goes to the same area, probably better to just have one link

Brand Perception

We asked participants to rate their perception of the brand before and after their experience with the site.

For the rating scale: 1 = Very Negative, 4 = Neutral, and 7 = Very Positive. We also included the option for participants to say they weren’t familiar with the brand before the task.

Merck saw a slight dip in brand perception post-task, but still ended with a positive 5.2 out of 7. Bristol-Myers Squibb, interestingly enough, held steady at 4.9 throughout the study.

Net Promoter Score (NPS)

After participants interacted with the sites, we asked them to rate how likely they would be to recommend the site to friends, family or colleagues.

Both sites were once more neck and neck, but Merck edged out Bristol-Myers Squibb by a single passive response, putting its likelihood to be recommended just ahead.
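For reference, NPS is conventionally computed from 0–10 likelihood-to-recommend ratings: the share of promoters (9–10) minus the share of detractors (0–6), with 7–8 counted as passives. A minimal sketch, assuming that conventional scale (the ratings below are made up for illustration, not study data):

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Passives don't count toward either side, but they do grow the
# denominator, so one extra passive pulls the score toward zero:
nps([10, 9, 8, 6])     # 2 promoters, 1 detractor over 4 raters -> 25
nps([10, 9, 8, 8, 6])  # same promoters and detractors over 5    -> 20
```

This is why a single passive response can separate two otherwise matched sites, as it did here.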


This week’s battle was incredibly close, and we weren’t sure who would come out ahead until the very last participant finished the study. The winner of this week’s usability battle, however, is Merck! Despite logging two fewer successful completions than Bristol-Myers Squibb, Merck’s users were faster, less likely to abandon, reported fewer problems and frustrations, rated the task higher on ease of use, and succeeded more often in the tree test. Shout out to both sites for making this a truly difficult decision.

Want to learn more about competitive
UX Benchmarking with UserZoom?
