Lee Duddell, UserZoom’s Senior Director of UX Research, reveals some of the mistakes that newbies make when they’re running tests for the first time and how they can avoid them. 

Lee Duddell at BetterUX 2019

In the following video and transcript of his talk at BetterUX London 2019, Lee takes a deep dive into some of the most common mistakes newbie researchers make and how these can be avoided through training, guardrails and teamwork. This can also help you democratize UX throughout your organization by training rookies and non-researchers alike in the ‘right’ way to do research.

Every couple of weeks we’ll share a different presentation from BetterUX 2019, including ones from FlixBus and Booking.com. However, if you want early access to all seven videos from the event, you can sign up to view them right now!

Access videos

In the meantime, take it away, Lee…

So how do we help? There’s a lot of talk today about democratization and how we help newbies to UX testing actually run good-quality tests. I’m gonna spend a few minutes explaining why you should listen to me. Then we’ll look at some mistakes that newbies make when they’re running tests for the first time, and how to avoid them. We just heard about training, so I’ll talk through how we actually run Agile training workshops at UserZoom for our customers.

Why should you listen? Well, if you’re a researcher you’re gonna learn how to run training sessions and to support and to enable non-researchers to run their own design tests. If you’re not a researcher then you’ll learn what some of the common problems might be or the common mistakes that you might start making yourself if you were to start running research today.

Why should you listen to me? At UserZoom over the last year and a half, we’ve trained hundreds of non-researchers to run research. We’ve done that through in-person Agile training workshops and through one-to-one builds with people, and we’ve developed those workshops, as well as other assets, to support the democratization of testing. And I’ve worked alongside my research partner colleagues in EMEA, so in Europe, and also in North America to make that happen. So: lots of experience of training people who are not researchers to run their own research.

I thought we should start with a persona. So Matt is gonna be our persona.

He’s a designer. He’s been… don’t laugh at him. He’s… so he’s a designer. He’s been designing… you know, he’s been in the role for maybe three years at our organization. He’s only ever worked in Agile. He doesn’t know what Waterfall is. He doesn’t know what lots of our documentation means for example and he likes his beer like he likes his beard, which is crafted. I wondered if that would work. And he doesn’t like socks and he won’t wear socks. But he’s highly motivated to run his own tests because although he works with a great research team he’s slightly frustrated that they can’t always answer his very tactical small design questions that he has.

But more so his main motivation…and this is if you’re thinking about how do we pitch this to designers, how can we pitch to people who don’t have… to people like Matt who don’t have experience running research, the main way that running tactical research themselves can help them is to delete debate. To delete that ongoing repetitive debate within a design team or within an Agile team where they’re looking at version A and version B and debating time and time again as to whether they should do A or B or maybe create C. They can put that to users to remove that debate. And that’s one of the main motivations that people have for running their own research.

So in this Sprint… oh, by the way, he works at, and you probably could’ve guessed, a sushi delivery company. You can order online. It’ll come through and… yeah, be there very quickly. And in this Sprint, he’s looking at the kind of voucher code part of the experience where you put in a voucher code to get some money off.

So that’s his focus. And he’s created a design. And it’s great. And so he logs into UserZoom and he starts building his study. Let’s look at what he might put together without any training or support or enablement.

He builds a screener

Anybody tell me what might be wrong with that screener?

Sorry. You’ll have to shout. It’s not open enough. It’s too easy to guess what the right answer is. I mean, he’s gonna get a great participation rate, because any participant’s gonna go, “Well, I think I will get paid if I say yes.” And we see this a lot with untrained or unsupported non-researchers: they’ll ask these kinds of questions in screeners.

Okay, so let’s say he’s fixed his screener to make it a little bit more obscure as to what the right answer is and what the wrong answer is. He then uploads his design and as a participant, this is the first screen that I see after I’ve just got through the screener.

And I’m not gonna go into the detail of this. This is a click test so we’ve uploaded an image and we ask people to perform an action on there. And it says, “Where would you enter the voucher code?” What might be wrong with that?

No scenario. Exactly. Spot on. I’ve seen this a lot when we’re training people. It’s kind of inward-looking: they’re so excited about the design they’re working on at the moment, so passionate, they just go, “Great, we can test this quickly. Let’s put this in front of people. Everybody will know what that is.” But these are people who aren’t in your organization. These are people sat at home, and for them to suddenly be asked a question like, “Where do I put in the voucher code?” right in the middle of a checkout, that’s quite different to a first-impressions question. So yeah, he needs to set a scenario.

So he sets a scenario.

Now, I won’t read it all out. “It’s Friday morning and you’re at work in a swanky office in SoHo. The team you’re in have been…” and it goes on and on and on. And this is quite typical. It’s really not unusual: again, people’s internal passion comes across, and they want to express that and share it, and set a scenario that, in their mind, is really gonna get participants into the right frame of mind when they go on to do the study. What might be wrong with that?

Too long. Yeah. It’s both too long and most of it’s irrelevant as well. Yep. And if you think about the mental load of a participant there, what do they even need to remember? What really, really matters? What might be a better scenario?

Something like that, yeah. Or you’ve just…yeah, it doesn’t have to be too sophisticated. Absolutely right.

So he then thinks, “Well, actually I’ve been working on two designs here so I’m gonna ask people to compare two designs.” And he sets up a question on UserZoom. Other platforms are available. And he asks people literally like this, “Which design do you prefer?”

It’s A versus B and they’re both…and everybody sees both designs. And what might be wrong with that approach? Because that seems like quite logical doesn’t it, to do that? Researchers?

Bias. Yep. So there’s potential for bias there if you present A before B. You could set it up so you do B before A, but I guess for me the key thing is that it’s very attitudinal. You know, somebody can prefer one design to another. Let’s take the Amazon website and compare it to another website that looks visually better. Which design do you prefer? You might say, “I prefer the design of the other site.” Amazon doesn’t do badly though, does it? I think what’s happened here is that the designer, Matt, has taken his research question and just literally applied it to the study. And we see that quite a lot as well. All of these are real examples from where we’ve trained people.

He’s finished with his flat visuals and now he’s gonna run a, I think Louise spoke about it, a think-out-loud study, where he puts a prototype, this happens to be an InVision prototype, into UserZoom and asks participants to try and achieve something, to try and do something, and think out loud while they do it. It’s a great way to identify points of friction.

At the bottom it says…this is the task he’s giving them. “You want to order some sushi for the office.” What do we think of this? We can pretend he’s put in a scenario as well.

Hundred percent right. Sorry, there was another one as well, who said no actual task. Yeah, exactly. Even with a scenario, it’s just “You want to order some sushi for the office,” and if I’m a participant I’m like, “Do I? What do you actually want me to do or to try and achieve here?” So again, this is quite typical of expecting the participants to guess what it is we want them to do. Let me summarize those five very quickly.

And these are probably the five most common mistakes we see. I’m focused very much on test setup here rather than analysis, so let’s stick with that:

1. Easy-to-guess screeners with yes/no answers.
2. No scenario at all, just jumping straight into the designs you’ve been working on.
3. Too much scenario, so it becomes a mental overload for participants.
4. Asking too much about a design, not translating a research question into tasks but simply turning it into a question for participants.
5. Setting tasks that aren’t actually tasks, that aren’t actually things to do or things to try and achieve.

Have you guys got any examples of what your Matts have done that you’d like to share?

Sorry. “Would you bother to click on this?” That’s great. That’s great because that’s clearly some kind of call-to-action study or something like that, isn’t it? “Would you bother to click on this?” Okay. That’s a cracker. Yeah. Anybody else got one? It’s gonna be hard to beat that, I think, isn’t it? Okay.

So how do we help Matt? We can’t just sit here and laugh at him. Well, there are three ways, and I think the guys at Booking.com touched on this as well.

The in-person training is the start, so run a training workshop internally for Matt and his colleagues. Secondly, encourage teamwork, so there’s lots of sharing and working together on studies. And thirdly, set up some guardrails so that Matt is not… I don’t like to use the words forced or constrained, but certainly for his first few studies, to begin with, he’s only running a certain type of study, only answering narrower design questions, until he becomes more confident and more competent as well. Let’s go through this.

Is this on? Yep. You can hear me? Great. So this is how we structured our Agile training workshops at UserZoom that we deliver for our customers and you might want to use this as well.

There are five main sections. You start by getting people excited: if you’ve got one, share an internal case study of where testing has made a real difference to an Agile team, emphasizing that it’s helped them delete debate as well as maybe improve some metrics too.

The second thing is you take them through the stages of the testing process. None of this is rocket science. You start with a research question, or maybe a business question. You build a study, and, as I’ve highlighted there, you test your study. It’s that testing and review of the study’s tasks and questions that will improve the quality of the study Matt produces. And then you launch and, of course, look at results.


You then map the research questions that Matt and his team might have to specific studies, and then build some studies together, with Matt and his colleagues in the room, and critique them. I’ll show you how to do that in a moment.

So here’s an example of a slide from our training material where we start mapping design questions to a type of study.

Now, on the subject of calls to action, CTAs that don’t work, there’s meant to be a link in that button to a click test. But basically, what you start discussing within the training session with the team is: what is a research question, what’s a valid research question, how do you structure one, and what kinds of research questions do you have. There are some examples up here, like “How can I improve this new page design?”, “Should we go with version A or version B?” and “Which performs best?”

And you start talking about those questions and the types of assets that are available, such as an early-stage design, and then show some examples of how you can answer those questions as well. You don’t have to use UserZoom for that. You could use multiple platforms. But the key thing… and I think it was Brooke who described it as… I can’t believe I’m gonna use these words: “pretty dope.” I’m gonna have to hand my passport back.

Who described it as “pretty dope” to show this kind of thing. And there’s a reason we’ve looked at click tests here: for the design community, and certainly for a lot of people who’ve not run research before, it can seem a bit abstract, but if you show the results like this then it really kinda brings it to life for them. So I would suggest you do that in the training.

Then get yourself on a build and critique mission.

What do I mean by that? Well, the way that we run it is, say, in a room full of 12 people, they pair up, and one of each pair logs into UserZoom and creates a study. It’s like a build-along. So you’re at the front and you create the screener, and they create their screeners. You create the task and upload an image, ideally, if you can, on a project that’s live with them at the moment, so it’s meaningful. Everybody does it, just one task and maybe a couple of questions, and then they email the six studies they’ve done to you at the front, and then you humiliate them by reviewing what everybody put in. And that is actually the best way to get people to think outside-in about their studies rather than inside-out.

So for all five of those mistakes that we saw at the beginning, the best way to counter them is to have people make those mistakes, share them, and then as a group identify how to improve. That is the best way. It’s not enough, I think, just to tell them, “This is how you do it.” You have to let them fail, and do it in a classroom environment as well.

It’s worked well. We’ve been doing this for a year and a half.

The other thing is to encourage teamwork. There’s been a lot of talk about teamwork today, and Agile teams should be well into it. They’re supposed to be autonomous teams, and there should be lots of overlap in their roles and what they do within a team. They should leave that workshop not only equipped to start running basic studies but enthused to do so. So you wanna capture them there at the end of the workshop and say, “You need to set time in your Sprint, ideally at the beginning of each Sprint, to have a quick chat about research questions and identify what business questions you want to answer during that Sprint.”

They must peer review the studies they create within their team, and ideally outside it too. Again, tell them this at the end of that training session, when everybody’s seen how tricky it can be to write good tasks and good questions. That’s the time to capture them and suggest that they do that. And of course, and I think this is the other theme, the changing role of researchers: they can lean on you. They can rely on you to review certainly their first few studies and give feedback on those, and that wouldn’t necessarily slow them down. We’ve been doing that at UserZoom through our research partners for a couple of years now as well.

And the third thing, guardrails.

So it’s a good idea to start them with a couple of study types or methods. Now, you might know what’s best in your organization. Typically what we do is run image-based studies, where there’s a flat visual or a flat design or maybe a screenshot. We train Matt and his colleagues to run those alongside think-out-loud studies, where you can upload a prototype and people give spoken feedback. Start people there. Don’t try to teach them every single research method in the world. Start them with two that they can understand and get going with. I think Louise said that the think-out-loud was pretty much the big focus for her team. Yeah. So it seems to work.

You can save time by setting up a bank of screeners, questions, tasks and templates for people to use, rather than having them think about screeners from scratch every time. Those act as a guardrail, so there’s a kind of best practice within their account to use.

And Alfonso mentioned it earlier, there’s this thing called UserZoom Academy. Now, whether you’re using UserZoom or not you can access this and you can give people…you can point people at short video tutorials to enable them to not just build studies in UserZoom but to understand how to go about the research process as well.

So URL is at the top, academy.userzoom.com, and we’ll be building that out over the rest of this year. [Editor’s note: It’s live now!]

So there are three things to remember to help Matt. We’ve been through quite a lot there. First, run a training workshop: writing tasks and questions, which for those of us with a research background might feel quite natural, is actually quite difficult to do, but if you have people do it and then share what they’ve created, that’s a great way to let them learn from getting it wrong. Second, encourage teamwork, with peer review of each other’s studies. And third, set up guardrails: start them off with two simple study types, and I would recommend image-based and think-out-loud studies.

If you’d like to know more about how UserZoom can help test, measure and improve your own site’s UX, please get in touch!

Contact us!