
PC Magazine

  PC Tech

Part 1: Software Usability Testing
  Introduction
  Types of Usability Testing
  The Usability Testing Lab
  Real-World Usability Results
  The Good and the Bad of Usability Testing



Part 2: Usability Gaffes: Things That Make You Go Arrrgh!
Part 3: Talkback: Tell us about usability gaffes you've found


Making Software Easier Through Usability Testing
Types of Usability Testing

Continued from Introduction

This column will focus on software testing conducted in a usability-testing laboratory, but other types exist. Surveys and questionnaires can help determine how users perform specific tasks, how they engage with interface elements such as icons and menus, how they work with help systems and wizards, and how satisfied they are with each. Related to these methods are focus groups, widely used in product design of all types, in which a moderator orally questions a group of users. Another type of testing is field observation, in which the tester watches people use the software in their own work environments.

Usability testing is most effective, however, when it attempts to determine how much time a user needs to complete a task or series of tasks and how difficult the tasks seem. This requires a controlled testing environment: with questionnaires and focus groups, users' memories are unreliable, and with field observation, users face real-world distractions. Controlled tests let designers and developers see how well a program works in and of itself.

A controlled usability test places a single participant (or on rare occasions, two participants) in a particular location, where he or she is given a document, or task list, outlining a set of tasks to perform in a particular order. For example, a task list might ask a participant to save a document or to print a document using landscape orientation. The first task requires finding a menu item, an icon, or a hot key, then working with the file-save dialog. The second requires the same, except that the concept of landscape orientation becomes an issue. The test could also be designed with visual cues, showing participants an example of landscape orientation and asking them to print a specific file so that it resembles the example.
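To make this concrete, here is a minimal sketch of how such a task list might be represented in software. It is written in Python purely for illustration; the field names and sample tasks are invented for this example, not drawn from any actual testing tool:

    # A hypothetical representation of a usability-test task list.
    from dataclasses import dataclass

    @dataclass
    class Task:
        number: int           # tasks are performed in this order
        instruction: str      # what the participant is asked to do
        success_check: str    # how the observer judges completion

    task_list = [
        Task(1, "Save the open document under a new name.",
             "File saved with the new name"),
        Task(2, "Print the document in landscape orientation.",
             "Printout matches the landscape sample provided"),
    ]

    for task in task_list:
        print(f"Task {task.number}: {task.instruction}")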

A set of linked tasks might ask participants to start a spreadsheet table, enter specific data, perform specific calculations, and print in a specific format. Participants complete each task individually, but by linking them, developers can determine the usability of a procedure that people might actually perform in their jobs. In real life, after all, tasks don't take place in isolation. They're always part of a larger procedure.

Developers, of course, aren't interested primarily in the test itself but in its results. And that raises the question of exactly what these tests can show them. The quantitative data that testers can gather include time per task (or per set of tasks), number of errors, and number of requests for help. Most important, developers can learn how long each participant took to complete the tasks specified in the task list. Someone with a stopwatch can record relatively useful data in this fashion, but as we'll see below, more precise measurements can be captured with more sophisticated equipment.
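As a rough sketch of the arithmetic involved, the raw observations reduce to a few summary numbers per participant. The Python below is illustrative only; the task names, timestamps, and counts are invented:

    # Hypothetical observations for one participant.
    # Per task: start and end (seconds from test start), errors, help requests.
    observations = {
        "save document":   (0.0, 75.0, 1, 0),
        "print landscape": (75.0, 310.0, 3, 1),
    }

    total = 0.0
    for task, (start, end, errors, helps) in observations.items():
        elapsed = end - start
        total += elapsed
        print(f"{task}: {elapsed:.0f}s, {errors} errors, {helps} help requests")
    print(f"Time for the full task set: {total:.0f}s")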

But as much as time-per-task data will inevitably help in revising the product design, there's more to these time measurements than is immediately apparent; this is where observation comes in. Observers record precisely at which point in a task the participant slowed, stopped, got sidetracked, or tried something new. Observers also encourage users to talk aloud while working through the tasks (the talk-aloud protocol), recording what users say and where in the task they say it. They also record gestures, ranging from hands thrown up in frustration to hands thrown up in victory, and everything in between.
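One way to picture the observer's record is as a time-stamped event log that ties each pause, remark, and gesture to a point in the task. The sketch below is hypothetical; the event types and entries are invented for illustration:

    # A hypothetical observation log for the landscape-printing task.
    # Each entry: (seconds from task start, event type, observer's note).
    log = [
        (12.0, "action",  "opens File menu"),
        (31.0, "pause",   "hesitates between Page Setup and Print"),
        (48.0, "comment", "asks: 'Where do I turn the page sideways?'"),
        (70.0, "gesture", "throws hands up in frustration"),
        (95.0, "action",  "finds Landscape option in Page Setup"),
    ]

    # Where in the task did the participant slow, stop, or struggle?
    for t, kind, note in log:
        if kind in ("pause", "gesture"):
            print(f"at {t:.0f}s: {note}")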

The idea is to give program teams highly specific user feedback about their design. Instead of postrelease user feedback, which tends to be much less focused ("I couldn't get a graphic into a cell--took me hours!") or completely unfocused ("It just stinks!"), these tests show where user satisfaction is likely to suffer unless the product is altered. If usability testing begins early enough in the product cycle, these potential problems can be isolated in time to change the finished design. In the major software houses, in fact, usability personnel work with development teams from the earliest design stages of the product. Still, even in these cases, a great deal of testing takes place after the product actually works, and by that point features and interface design are pretty well frozen in place. In that case, the test results will affect the next full or interim version of the software.

One popular, inexpensive, and obvious method of at least partially controlled usability testing is for company employees to play the role of test participants and work through a task list. This arrangement can be informal, with the participant doing all the timing and recording, or quite formal, drawing participants from a wide variety of potential users inside the organization. Clearly, this method can be problematic (one employee disparaging another's work brings all sorts of difficulties), but if handled well it can be valuable too.

The most sophisticated method of usability testing occurs in a usability lab. Here, controlled tests of specific tasks can take place, with full observation, recording, and reporting facilities. The obvious disadvantage to this method is cost, in both equipment and personnel, but if the whole point is ensuring user satisfaction, hence current and future sales, the benefits might well exceed this disadvantage. We'll examine a typical usability testing lab next.

In all types of usability testing, one thing must be kept in mind: The software, not the participant, is being tested. Participants must be told in advance that they are indeed participants, not subjects. Because the point of usability testing is to determine where the product's design fails, some tests will almost certainly lead the participant toward failure. It's easy for the participant to feel stupid.

It's just as easy for the product developers to agree. Their brainchild, after all, is under scrutiny and criticism by someone far less technically knowledgeable. Usability management must take this possibility into account, but ultimately the problem disappears only when usability testing is built into the design cycle and respected at all levels.

Next: The Usability Testing Lab

Published as Tutor in the 10/6/98 issue of PC Magazine.

 
Copyright (c) 1998 Ziff-Davis Inc.