We recently had a webinar with Jon Innes of UX Innovation to discuss how to improve your sprints by incorporating UX/usability metrics that the whole product team can use to measure progress on your agile UX journey. During the webinar, attendees asked a lot of great questions. Given the popularity of the topic, and how often we hear these questions asked, we passed your questions along to Jon and transcribed his answers to share with our readers. Enjoy!


Q: What do you recommend to a startup without any existing active users when it comes to finding and recruiting test users for a consumer mobile app?

A: It depends on how specific your user profile is. If it’s a common consumer profile, you might use a marketing firm or the UserZoom panel service.


Q: What is your advice around planning user research and fitting into Agile sprints?

A: I’d recommend looking for long-term sources of users and planning studies every few weeks or once a month, depending on how fast you are moving. If you are moving really fast and have lots of hypotheses to test, I’d even consider planning ongoing studies with findings every week if you can.


Q: When do you recommend doing in-person user research?

A: Early on in a project there are often a lot of nuances that are best analyzed via qualitative methods like in-person interviews or 1:1 remote usability tests. These can often serve as a good basis for formulating hypotheses that you can only answer later with quantitative methods like surveys, A/B tests or RUT.


Q: Can you talk about how to handle cynicism from developers who see UX (or Product) Designers as a “priesthood” and who point fingers if initial user feedback isn’t all positive?

A: Work with the larger team to set measurable UX goals for the product. Once those are set, as long as you can show that your designs are improving the metrics that track progress toward those goals, most developers begin to appreciate the value you add as a UX designer.


Q: Can you talk about the misnomer of MVP as MINIMAL?

A: The misunderstanding around “minimal” is that it refers to the minimal set of features required to determine whether the product meets users’ needs, not the minimum quality the product needs to achieve. Frank Robinson, the inventor of the term, explains MVP well here.


Q: Do you have any recommendations for cases when customers don’t give access to end users to test their hypotheses and prototypes?

A: This presents a major issue. The best thing you can do in this circumstance is find a suitable proxy for these users that you can use during testing. I’d also recommend trying to find another customer who is more open to participating in the design process.


Q: Where should UX fit in the Agile process? Specifically, should it be built into the typical development workflow, which goes something like: 1) Open, 2) In Development, 3) Done? We use Jira, and I’m curious whether UX should be incorporated into the workflow or whether it should exist beforehand, with the workflow reserved for development.

A: In many teams, Jira is used just to track development tasks. As a start, I suggest you work with your Scrum Master to create Jira tasks for wireframes that can be used during sprint planning, before the team picks up the development tasks. Keep in mind that you’ll probably need to do some detailed design during the sprint, as well as some research tasks to inform the next sprint. The key is making sure that tasks in Jira aren’t considered “done” until they pass some type of user-centered criteria.


Q: How do you apply the iterative testing approach to Enterprise products which have long (let’s say, year-long) release cycles?

A: With long development cycles, the key is to break down parts into smaller chunks of work that you can complete in less than a month. If a project takes a year to build, some parts should be testable after a month or so, and you can use the later sprints to improve things done earlier as well as complete the remaining work.


Q: How do you automate UX testing?

A: Define tasks in a way that can be presented to users via online instructions, and provide them access to a tool like UserZoom to measure users’ ability to complete the tasks, as well as to gather feedback on satisfaction.
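
To make those metrics concrete, here is a minimal sketch in plain Python (not UserZoom’s actual API; the field names and numbers are hypothetical) of how task-level results from an unmoderated study might be aggregated:

```python
from statistics import mean

# Hypothetical per-participant results exported from an unmoderated study.
# Each record holds a task-completion flag and a 1-5 satisfaction rating.
results = [
    {"participant": "p1", "task": "find_pricing", "completed": True,  "satisfaction": 4},
    {"participant": "p2", "task": "find_pricing", "completed": False, "satisfaction": 2},
    {"participant": "p3", "task": "find_pricing", "completed": True,  "satisfaction": 5},
]

# Share of participants who finished the task, and their average satisfaction.
completion_rate = mean(1 if r["completed"] else 0 for r in results)
avg_satisfaction = mean(r["satisfaction"] for r in results)

print(f"Task completion rate: {completion_rate:.0%}")
print(f"Average satisfaction: {avg_satisfaction:.1f} / 5")
```

Tracking these two numbers per task, sprint over sprint, gives the team a simple view of whether usability is trending in the right direction.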


Q: How does a deadline-driven marketing team fit into the Agile/Lean UX process? With massive print/TV deadlines, I find UX research is the first aspect to be dropped.

A: Most marketing groups understand the idea of running pilots to gather feedback on the effectiveness of the materials. Work with the team to run A/B tests as part of the process. Make sure to set good, evaluable metrics with the campaign owner, so future campaigns can be compared to prior ones.
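
As a hypothetical illustration of what evaluable metrics can look like in practice, here is a sketch of a simple two-proportion z-test comparing conversion rates across two campaign variants; the numbers are made up for the example:

```python
from math import sqrt, erfc

# Hypothetical results from two campaign variants.
conversions_a, visitors_a = 120, 2400   # prior campaign
conversions_b, visitors_b = 156, 2500   # new campaign

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Two-proportion z-test: is the difference in conversion rate likely real?
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z_score = (rate_b - rate_a) / std_err
p_value = erfc(abs(z_score) / sqrt(2))  # two-sided p-value

print(f"Prior: {rate_a:.1%}  New: {rate_b:.1%}  p-value: {p_value:.3f}")
```

The specific statistic matters less than agreeing on it up front with the campaign owner, so each campaign can be judged against the last one on the same terms.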


Q: If development works on tasks while the design is being validated, resulting in rework for development/testing later, is that normal?

A: Yes, it is normal. Agile emphasizes iteration. Make sure the team knows the difference between incremental development and iterative development. Agile projects should be both incremental and iterative in nature.


Q: My company adds goals to each backlog item within a sprint and leverages a ‘dual-track agile’ methodology, and backlog items span multiple sprints. This doesn’t seem like good practice. Any recommendations on how to make this process better?

A: Make sure that each sprint results in something that is done enough to evaluate, both in terms of user feedback and complete, working functionality. When multiple agile teams are building things, be careful that their efforts converge into something valuable to users and customers every sprint.


Q: Should we test wireframes or designs when it comes to evaluating UX? Does it make sense to test both? Thanks!

A: There are cases where testing wireframes makes sense, especially when you are evaluating concepts, terminology, and task flow. If your focus is on detailed design, wait until you have higher-fidelity mockups or prototypes, or even working code.


Q: What do you recommend for a strategy when a site has proactive chat? Specifically when looking to use an intercept for feedback, and wanting a representative sample, but not wanting to negatively impact chat.

A: I suggest you intercept them before they start the chat session. I’m not certain what the chat session is for (support for the user trying to use the site?) but you don’t need to intercept every user to run studies.


Q: What do you think about mock-up tests with employees of a company vs recruited participants?

A: If what you are testing doesn’t require any domain-specific knowledge, then testing with employees is a good shortcut. However, I recommend you eventually test with actual users. You’ll find real users may have different skills or knowledge than people working at tech companies.


Thanks Jon, and thanks to all of our webinar participants for your great questions! In case you’re sharing this with colleagues who couldn’t make it to the webinar, you can watch all of our webinars on-demand.
