Books & People


Book Review: Handbook of Usability Testing (2nd Ed.)


By Christine Wiegand, SAP AG, SAP User Experience – April 2, 2009

This review takes a personal look at Dana Chisnell's and Jeffrey Rubin's book Handbook of Usability Testing (2nd Ed.).



Cover of Handbook of Usability Testing (2nd ed.)     

Dana Chisnell & Jeffrey Rubin
Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests (2nd ed.)
Wiley, 2008
ISBN-10: 0470185481, ISBN-13: 978-0470185483




Photo of Dana Chisnell


Jeffrey Rubin, author of the first edition of this book, has more than 30 years of experience as a human factors and usability research consultant and lecturer. A pioneer in the field of usability testing, he has consulted for leading companies worldwide.
(From back cover of the book)

Dana Chisnell is an independent user researcher and usability consultant. She has done usability, user interface design, and technical communications consulting and development since 1982.
(From back cover of the book)



Fourteen years on, Jeffrey Rubin has published a revised second edition of his Handbook of Usability Testing. Having retired from usability consulting after more than 30 years, Jeff recruited Dana Chisnell as a co-author for this second edition.

Why a New Edition?

In the preface to his new book, Jeff explains the reasons for publishing a revised edition.

In his opinion, the world of usability testing has changed dramatically since he published the first edition: the term “user experience” has become commonplace and “usability testing has gone mainstream.”

Other notable changes, according to Jeff, are the Internet explosion, the transportability and miniaturization of testing equipment, the countless methods of data collection (remote, automated, digitized), and the constantly shrinking life cycle for introducing new technological products and services. But the overall rationale for the book remains the same: The number of trained usability professionals available still does not meet the demand for usable products. Therefore, people with little formal training in usability engineering or user-centered design are being asked to perform tasks for which they are unprepared.

In their book, Jeff and Dana want to “help bridge the gap in knowledge and training in usability engineering or user-centered design by providing a straightforward, step-by-step approach for evaluating and improving the usability of technology-based products, systems, and their accompanying support materials.” In my conclusion, I will ask whether the book lives up to these promises.

Additions and Revisions

The following additions and revisions have been made:

  • The book has been restructured and simplified into three parts instead of four.
  • Many chapters have been reorganized to align them more closely to the testing sequence.
  • The layout, format, and typography have been improved.
  • Many of the examples and samples that preceded the ascendancy of the Internet have been updated.
  • Drawings have been improved.
  • There is also a new ancillary website containing supplemental materials such as:
    • Updated references
    • Books, blogs, podcasts, and other resources
    • Electronic versions of the deliverables used as examples in the book
    • More examples of test designs and, over time, other deliverables contributed by the authors and others who wish to share their work


The authors address three groups of readers:

1. Non-usability professionals who have to conduct usability tests in their daily work – such as product developers, engineers, system designers, technical communicators, and marketing and training specialists.

2. Usability specialists who are perhaps new to the discipline, including:

  • Human factors specialists
  • Product and system development team managers
  • Product marketing specialists
  • Software and hardware engineers
  • System designers and programmers
  • Technical communicators
  • Training specialists

3. College and university students in the disciplines of computer science, technical communication, industrial engineering, experimental and cognitive psychology, and human factors engineering, who wish to learn a pragmatic approach to designing usable products.


The book opens with a foreword by Jared M. Spool and a preface by Jeff. It is divided into three main sections:

  • Part 1: Overview of Testing
  • Part 2: Basic Process of Testing
  • Part 3: Advanced Techniques

In the following, I will summarize the hints, facts, and guidelines that interested me most while I was reading the book:

Part 1:

According to the authors, it is important for the reader to comprehend the basic principles of user-centered design (UCD) in order to understand the context for performing usability tests. Therefore, Part 1 covers the definition of key terms and discusses some techniques for incorporating usability into product lifecycle management – such as ethnographic research, participatory design, focus group research, surveys, walk-throughs, card sorting, paper prototyping, expert or heuristic evaluations, usability testing, and follow-up studies.

Chapter 3 of Part 1 outlines the methodologies and uses of different types of tests: exploratory or formative studies, assessment or summative tests, validation or verification tests, and comparison tests.

The last chapter explains the basics of moderating a test, investigates several alternatives for acquiring test moderators from inside and outside the organization, and discusses the ideal characteristics of an effective test moderator.

Valuable tip for test moderators: The book provides links to the codes of ethics of the Usability Professionals’ Association and the Human Factors and Ergonomics Society.

The authors also suggest considering a list of “what not to do” while conducting a usability test, and describe the most common errors that test moderators make.

Part 2:

This part of the book covers the “how-to” of testing in a step-by-step fashion. In Chapter 5, the authors point out the need for accurately describing what testers hope to learn – the so-called “research questions” – while developing a test plan. The sample test plan in the download section of the companion website contains sample research questions. If you are planning a test session at a participant’s workspace, you will find helpful checklists in Chapter 6.

On the subject of finding and selecting test participants, the section about screening questionnaires might well be of interest to readers. It contains a sample screening questionnaire and recommends sources for finding suitable test participants. The authors mention an interesting website for postings, Craigslist, which I had not encountered before reading this book. They also describe the recruiting process – including scheduling – in a nutshell.

The guidelines for observers posted in the download section of the companion website might be helpful to anybody who needs to conduct a usability test. I found the sample orientation script for a basic test useful as well.

The authors mention a data collection tool called the Usability Testing Environment (UTE), which enables usability tests to be executed with limited resources by automatically collecting interaction data such as clicks, keystrokes, and scrolling. My department has not yet used automated test tools; instead, we work with teams that include a note-taker and a moderator for each usability test.
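To give a rough idea of what such automated collection involves, here is a minimal sketch in Python. It is my own illustration, not the UTE tool itself, and all names in it are invented; it simply timestamps and counts interaction events the way such tools do on a larger scale:

```python
import time

class InteractionLogger:
    """Minimal sketch of automated interaction logging
    (illustrative only -- not the actual UTE tool)."""

    def __init__(self):
        self.events = []

    def log(self, event_type, detail):
        # Record each interaction with a timestamp for later analysis.
        self.events.append({
            "time": time.time(),
            "type": event_type,   # e.g. "click", "keystroke", "scroll"
            "detail": detail,
        })

    def count(self, event_type):
        # How often did a given kind of interaction occur?
        return sum(1 for e in self.events if e["type"] == event_type)

logger = InteractionLogger()
logger.log("click", "Submit button")
logger.log("keystroke", "a")
logger.log("keystroke", "b")
print(logger.count("keystroke"))  # prints 2
```

The appeal of such tools, as the authors note, is that this kind of bookkeeping happens without a human note-taker having to capture every click by hand.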

Debriefing is important for drawing conclusions and understanding the results of usability tests. Basic debriefing guidelines are presented and discussed in Chapter 10. In addition, the “Devil’s Advocate” technique is described as an appropriate means of encouraging participants to criticize the product and evaluate controversial product features. The authors call it their favorite advanced debriefing and data collection technique.

In Chapter 11, readers will find a step-by-step procedure for compiling data. The authors recommend reading one of the introductory books on probability and statistics, such as Statistics for People Who (Think They) Hate Statistics by Neil J. Salkind or Statistics in Plain English by Timothy C. Urdan.
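To illustrate the kind of basic descriptive statistics that compiling usability data involves, here is a small Python sketch. The participant data are invented for illustration and do not come from the book; the sketch computes a task’s completion rate and the mean and median time of successful attempts:

```python
import statistics

# Hypothetical results from five test participants on one task:
# (completed?, time in seconds). The data are invented for illustration.
results = [(True, 95), (True, 120), (False, 300), (True, 140), (True, 110)]

# Only successful attempts count toward time-on-task metrics.
completed_times = [t for done, t in results if done]
completion_rate = len(completed_times) / len(results)
mean_time = statistics.mean(completed_times)
median_time = statistics.median(completed_times)

print(f"Completion rate: {completion_rate:.0%}")    # prints 80%
print(f"Mean time (completed): {mean_time:.1f} s")  # prints 116.2 s
print(f"Median time: {median_time} s")              # prints 115.0 s
```

Even this much arithmetic already answers the basic questions a final report must address: how many participants succeeded, and how long success typically took.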

Once a usability test has been concluded and the findings formulated, a summary should be communicated in the form of a final report. Report formats and content are discussed in Chapter 12, which includes examples such as an executive summary.

Part 3:

The final section of the book discusses variations in the basic usability testing methods for special groups such as disabled people or older adults, and looks at ways of expanding the practice of usability and user-centered design.  

Dana and Jeff also list useful sources relating to usability principles and guidelines, including books, conferences, journals and newsletters, usability-related societies, and strategies for expanding UCD throughout the organization.


Seen as a whole, this book is, in my opinion, primarily a beginners’ guide that helps readers gain an overview of usability testing. However, it also contains a wealth of useful hints for more experienced usability professionals. Readers can simply turn to the chapter that describes the testing phase they are currently in and use one of the many checklists or step-by-step guidelines to help them proceed with testing.

For samples of test materials and other valuable downloads, readers should also visit the book’s companion website.

All in all, I fully agree with Jared M. Spool, who states in his foreword to the book: “They’ve done a great job of collecting and organizing the essential techniques and tricks for conducting effective tests.” Therefore, I recommend the book to everyone who is involved in usability testing. It definitely merits its title as a “handbook” of usability testing and delivers on its promise to successfully bridge “the gap in knowledge and training in usability engineering or user-centered design”.


