Usability Testing With Prototypes

Interactive prototypes are a great way to bring usability testing into the early stages of software development. But are you doing it right? This article covers the what, how and why of usability testing with high fidelity prototypes.

Running usability tests on an interactive prototype can help you avoid the kind of 11th-hour headaches that make software designers’ and developers’ blood run cold: last-minute reworks, buggy launches and, worse, loss of stakeholder confidence and funding.

But for usability testing to work, it has to be done right. Running tests on interactive prototypes requires a different approach from testing coded software. This article gives an overview of when, how and why you should run usability tests on interactive prototypes.

Why Usability Test Interactive Prototypes?

Outstanding usability and user experience do not just happen by accident (although it would be convenient for time-pressed software developers if they did!). To create a product that demonstrates impeccable UX, usability testing, user scenarios and user requirements have to be incorporated into the design-development process from the outset. The last thing you want to be doing is fumbling through a last-minute usability test when your website or app is already coded and waiting to launch.

Prototypes provide early-stage opportunities to check functionality, design, UX, marketing and business strategy before coding your final solution. It can be up to 100 times more expensive to change a coded feature than a prototype, according to Web Usability. And IEEE’s report ‘Why Software Fails’ points out that an estimated 50% of rework time could have been avoided had testing been done in the early design stages. Presenting usability-tested and validated prototypes to potential funders can clinch investment that might otherwise prove elusive. Some of our clients at Justinmind use a user-validated working prototype to get stakeholders on board and to raise funds.

When to Run Usability Tests

There is no single best time to run usability tests: some propose first-steps testing with paper prototypes (check out this article for a practical guide on testing with pen-and-paper prototypes), while others call for high fidelity prototypes complete with interactions, animations and test-on-device capabilities.

It is also possible, and sometimes desirable, to run multiple rounds of testing as you move through prototype fidelities. Usability testing at the wireframe stage keeps testers focused on the real nuts and bolts of information architecture and navigation flows; you can then iterate up to a high fidelity prototype and test again.

But this article will focus on usability testing with high fidelity prototypes, which lets you observe users trying out complex interactions and tasks, test solutions to problems you found earlier, and iron out any last-minute glitches.

Prototype-specific Test Factors to Keep in Mind

Disclaimer: prototypes are not the answer to all usability prayers. When you are testing on prototypes, bear in mind any relevant limitations and, when looking at the data you gather, think about how prototype-specific factors might have played a part. For example, lorem ipsum placeholder text in your prototype may have hampered some users’ navigation, and a roughly mocked-up color scheme may lack the visual cues that users rely on. Keeping these potential issues in mind will help you both define your usability tests and understand the results better.

How to Run Usability Tests

Enough with the theory; here is the practical side to running usability tests on high fidelity prototypes. You will need:

  1. Sample users
  2. Interactive prototype with test-friendly features
  3. A facilitator
  4. Observers

1. Sample users

You should have your target user personas up and ready to go. Recruit users from those target groups with the help of user testing tools or your own resources. The key at this stage is the size of your test sample; you might think it is wise to run usability tests on as large a sample as possible at once. The more users you canvass, the more glitches you will catch, right?

According to usability guru Jakob Nielsen, wrong: “The best results come from testing no more than 5 users and running as many small tests as you can afford.” The basic premise behind Nielsen’s assertion is that testing with no users gives you zero insights, while testing with just one gives you a huge leap beyond that. Each additional test user brings an increasing number of repeated insights and a decreasing number of original ones. Nielsen illustrates this with a curve that flattens out quickly. To be fair, he later revisited his findings in a less-cited yet equally important article.
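To see why that curve flattens so fast, it helps to plug numbers into the model behind it. The sketch below (Python) uses the Nielsen–Landauer formula, assuming each user uncovers roughly 31% of the usability problems, the rate reported in their study; treat both the formula and that rate as borrowed assumptions rather than something measured on your own product.

```python
# Sketch of the diminishing-returns curve behind Nielsen's "5 users" advice.
# Nielsen/Landauer model: problems_found = N * (1 - (1 - L)**n), where L is the
# share of problems a single user uncovers (~31% in their data -- an assumption here).

def share_of_problems_found(n_users: int, single_user_rate: float = 0.31) -> float:
    """Fraction of total usability problems found by n test users."""
    return 1 - (1 - single_user_rate) ** n_users

if __name__ == "__main__":
    for n in range(1, 11):
        print(f"{n:2d} users -> ~{share_of_problems_found(n):.0%} of problems found")
    # With 5 users the model already predicts roughly 84% of problems,
    # which is the arithmetic behind running many small tests instead of one big one.
```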

So whatever your budget for user groups, it is better to distribute it across several smaller tests spaced evenly throughout the design process than to stake it all on one massive test-a-thon.

As well as testing your core target users, it can be very useful to round up some ‘rookie users’: people you might not expect to use your app or software. Testing with them can tell you whether there is a market for your product beyond its perceived horizons, and these rookies may reveal more about usability than your core market could.

Lastly, the way you communicate with your testers is the linchpin of effective usability testing. They have to be able to figure out tasks on their own, so use objective, neutral language when setting tasks. A descriptive instruction (“you want to buy an XL sweater”) will always throw up more useful test results than a prescriptive instruction (“Go to the Menu tab and click ‘Sweaters’ from the dropdown”). Only descriptive instructions will tell you whether your users are intuitively navigating the interface.

2. Interactive prototype with test-friendly features

Depending on the type of tests, and the product you are developing, your prototyping tool needs to offer different test-ready features. The minimum is scenario simulation: a simulate mode should let you run the prototype directly in the browser, so you can test remotely or in person. On top of that, realistic interactions, test-on-device capabilities and data table simulation are also vital if you want users to really experience your end-product. If you want to run formal usability tests, you will probably need a prototyping tool that integrates with a user testing tool.

3. A facilitator

Someone with skin in the usability game. The facilitator needs to know enough about users and their habits to sense when to dig deeper into an issue during testing, and to be able to manage groups of people who may have conflicting views or experiences.

4. Observers

It is appropriate for the design and development team to observe usability tests so they can understand user reactions without mediation.

How to Run Usability Tests with a Prototyping Tool

There are several different ways to go about usability testing, but all have some basic precepts in common:

Use realistic content

Users, both during testing and in the wild, rely on content to aid their decision making; generic placeholders are not intuitive to anyone. You do not need to go 100% content-driven if that does not make sense for your end-product, but do take the time to add realistic content, both images and text, to your high fidelity prototypes before testing.

Use realistic data

As above, unrealistic data will at best be a distraction and at worst an obstacle. It does not take much effort to fill in plausible, generic email addresses (no celebrity or piratical names, as UX Matters makes clear!) and it will yield more accurate results.
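If your tool supports data table simulation, a small throwaway script can produce believable rows to feed it. The sketch below is one way to do that in Python; every name, domain and field in it is an invented placeholder, not something this article prescribes.

```python
# Minimal sketch: generating plausible (but fictional) account data to seed a
# prototype's data tables before a test session. All values are invented placeholders.
import random

FIRST_NAMES = ["maria", "john", "amira", "pavel", "lucia", "ken"]
LAST_NAMES = ["garcia", "smith", "haddad", "novak", "rossi", "tanaka"]
DOMAINS = ["example.com", "example.org", "mail.example.net"]

def fake_account() -> dict:
    """Build one realistic-looking row for a prototype data table."""
    first, last = random.choice(FIRST_NAMES), random.choice(LAST_NAMES)
    return {
        "name": f"{first.title()} {last.title()}",
        "email": f"{first}.{last}@{random.choice(DOMAINS)}",
        "orders": random.randint(0, 12),
    }

if __name__ == "__main__":
    # A handful of rows is usually enough to make list views feel real.
    for _ in range(5):
        print(fake_account())
```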

Design your tests thoughtfully

Think about timing. Anything between 5 and 10 clearly defined tasks in a 60-90 minute session is typical. You should test the product’s main functionality, such as login procedures and conversion funnels, before testing the stardust sprinkled on top.

The way you word your tasks or questions is also crucial to getting useful results. Select judiciously between direct and scenario tasks (Tingting Zhao explains the difference concisely here), and between closed and open-ended tasks: a closed-ended task has only one possible successful outcome, whereas an open-ended task could yield several outcomes and still count as successful. Susan Farrell explains how and when to leverage these different task types here.

Whatever kind of task you go with, success criteria should be clearly defined and agreed upon for each. Woolly accounts of generally positive user experience are unlikely to impress stakeholders or potential investors. They will want to see the metrics and the success rates in black and white.
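Turning observations into those black-and-white numbers can be as simple as tallying pass/fail per task. The snippet below is a minimal, hypothetical example; the task names and results are made up purely to show the calculation.

```python
# Hedged sketch: turning raw pass/fail observations into per-task success rates.
# results[task] holds one True/False outcome per participant (example data only).
results = {
    "log in":            [True, True, True, False, True],
    "find XL sweater":   [True, False, True, True, False],
    "complete checkout": [False, True, True, True, True],
}

for task, outcomes in results.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{task:<18} {sum(outcomes)}/{len(outcomes)} succeeded ({rate:.0%})")
```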

To moderate or not to moderate?

Prototypes, no matter how high the fidelity, are never going to be as comprehensive as the final app or website. Users will probably have questions about what you want them to do, or how they might do it. When testing a prototype, you will probably want moderators there to address these uncertainties and make sure you are not wasting time or money.

Learn from failures as well as successes

A user ‘failure’, i.e. a problem, can tell you more than a user success, so make a note of it. Documenting in detail which test participants had difficulties and at which junctures will help you iterate interfaces faster.
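One lightweight way to keep that documentation consistent is a simple structured log. The sketch below assumes a handful of fields (participant, task, screen, issue, severity) chosen purely for illustration; adapt them to whatever your team actually records.

```python
# Sketch of a lightweight log for recording where participants struggled.
# Field names and the example entry are illustrative, not a prescribed schema.
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    participant: str   # anonymised ID, e.g. "P3"
    task: str          # which task they were attempting
    screen: str        # where in the prototype the problem appeared
    issue: str         # what went wrong, in the participant's own terms
    severity: int      # e.g. 1 (cosmetic) to 4 (blocks the task)

log: List[Observation] = []
log.append(Observation("P3", "find XL sweater", "category menu",
                       "expected sizes under 'Filters', not 'Menu'", 3))

# Sorting by severity gives the team a prioritised list to iterate on.
for obs in sorted(log, key=lambda o: o.severity, reverse=True):
    print(f"[sev {obs.severity}] {obs.participant} / {obs.task}: {obs.issue}")
```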

3 Usability Tests for High Fidelity Prototypes

Problem discovery

The classic usability test, problem discovery does what it says on the tin: it finds the most pressing usability speedbumps and (hopefully!) tells you how to fix them. It is as if you are the doctor and your interface is the patient; a problem discovery test helps you make a diagnosis and prescribe a cure. Run it repeatedly throughout an iterative design process for best results.

During a problem discovery test, the facilitator should monitor for issues and, when they arise, note and explore the actions and speech of participants. The facilitator should have the expertise necessary to analyze the outputs sensitively.

Eye tracking

Thanks to technology like Crazy Egg, it is possible not only to know where your users are clicking, but where they are looking as well (not always the same thing). Eye-tracking tests can tell you a lot about the psychology behind navigation flow and are particularly useful for understanding drop-offs and other barriers to conversion.

Competitive

You may think you have a great interface, but that claim means little if you do not know what you are up against. Running competitive usability tests involves having two groups of participants carry out identical tasks, one group on your prototype and one on the competition’s apps or websites. You might need a larger study sample than Nielsen’s magic five to produce meaningful data, so it helps that competitive tests can be run remotely if your prototyping tool supports it.

Common Mistakes when Testing High Fidelity Prototypes

Intervening mid-test

It is a common situation: a test participant runs into a problem and is struggling to complete a task. The temptation for the developer-observers is to stop the test or intervene. But watching participants struggle with your prototype can help you uncover built-in problems. Asking users afterwards why they struggled or what they were trying to achieve will yield more interesting results than holding their hands through the test.

Tackling bugs ad hoc

Fixing glitches without first assessing all the evidence is never a good idea: actions based on half-formed assumptions fly in the face of good usability test practices. A better approach is to use the prototype testing stage to note and analyze as many problems as possible without interfering; you will then be in a stronger position to start coding.

The Takeaway

There is no silver bullet for getting issue-free software, apps or sites up and running. But following these best practices when testing on interactive prototypes can reduce reworks at the code stage and go a long way to ensuring you run meaningful and effective usability tests.

Source: Usability Geek

Author: Cassandra Naji
