By Gerd Waloszek, SAP User Experience, SAP AG – September 1, 2009
In October 2009, Dan Rosenberg, senior vice president, SAP User Experience, will give a keynote speech at the German Quality Engineering 2009 conference (FQS-Forschungstagung 2009) in Frankfurt, Germany, titled "The Human Factor as an Integral Component of Quality Measurement." In preparation, he asked the Perceived Performance (PeP) team at SAP User Experience for input. This request started an unconscious process in my brain that tried to pull together two buzzwords that previously had no association for me: "perceived" and "quality." I asked myself whether there is such a thing as "perceived quality." How does it relate to objective quality? And what does it mean to users? In this editorial, I will look for some first answers.
I would like to start this editorial by loosely defining what I mean by perceived and objective quality. Objective quality refers to objective criteria that can be measured to define the quality of a product or, more generally, an object. You can easily see that for certain objects, such as works of art, there are no "objective" criteria – often we rely on the judgment of domain experts, for example, art critics. However, if experts are unavailable, we have to resort to our own judgments. And this is more or less what "perceived quality" is about: It is a subjective judgment of the quality of an object, be it a consumer product or a piece of software. As with perceived and objective performance (see Human Performance at the Computer – Part 3: Perceived Performance), there can also be a difference between perceived and objective quality: People may judge the quality of a product to be higher or lower than it objectively is – and, as we will see, they may do so for a number of reasons.
As with perceived performance or any other subjective judgment, the perception of quality – in short, perceived quality – determines users' mental attitude toward a computer system or software application, not the objective data. This attitude has an impact on a number of psychological aspects, such as satisfaction, motivation, trust, and the feeling of being respected – and these eventually determine whether users are satisfied with a system and are willing to use it.
A "natural" follow-up question is of course: What factors determine objective and perceived quality in the computer and software application domain? I will now look at some factors that, based on my own experience, can have an impact on both objective and perceived quality.
At SAP, there is a product standard for performance, indicating that performance is regarded as an integral element of software quality. Personally, I regard poor performance as the "number one" usability issue, and as such also as a quality issue (see my editorial What Matters Most?). On the other hand, there is the somewhat fuzzy concept of "perceived performance," which has repeatedly been demonstrated as different from objective performance criteria and measures. We have also learned that UI designers and developers use approaches or even "tricks" that help improve perceived performance. Approaches to improving perceived performance are, for example: Provide feedback if performance goals cannot be met, return control to users as soon as possible, let users resume their work as soon as possible, and ensure that users can finish their tasks successfully (see the highlight topic Human Performance at the Computer on this site for details). Users definitely regard poor performance (or responsiveness) as a quality issue and good performance as a desirable goal.
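One of the approaches listed above – returning control to users immediately while giving feedback on a long-running operation – can be sketched in a few lines of code. This is a generic illustration in Python, not SAP's implementation; the task name and messages are hypothetical.

```python
import concurrent.futures
import time

def prepare_report():
    # Hypothetical stand-in for a slow back-end operation
    # (e.g., generating a report that takes a few seconds).
    time.sleep(2)
    return "report ready"

# Instead of blocking the user interface until the work is done,
# run the slow operation in the background and give feedback right away.
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = executor.submit(prepare_report)
print("Your report is being prepared...")  # immediate feedback

# ... the user can continue working here while the task runs ...

result = future.result()  # fetch the result only when it is actually needed
print(result)
executor.shutdown()
```

The point of the sketch is the ordering: feedback appears at once and control returns to the user, so the wait is perceived as shorter even though the objective duration of the task is unchanged.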
In my editorial What Matters Most?, stability comes in as the "number two" usability and thus quality issue: System and application crashes have a number of adverse impacts on users. In particular, they lose time and may also lose valuable work. As a result, they often adopt inefficient strategies to circumvent losses. Even after the system has stabilized, they tend to keep these habits, thus not performing to their full potential. In addition, users do not trust the system and, depending on the severity of the stability issues, may even begin to hate and avoid using it. All this leads to very poor perceived quality. And objectively, the quality is, of course, also low if not disastrous.
Most people will probably first refer to an application's look – or, as professionals would say, its visual design – when they assess software quality. Repeatedly, I have found that the primary battles over an application were fought over its visual design. The reason for this is simple: The visual appearance of an application catches the eye first, and "look" is an area in which nearly everybody feels qualified to give an opinion. Thus, the professional quality of the visual design and the overall structure of screens and Web pages have a strong impact on perceived quality, particularly as long as users only see screens and do not yet use the system. While there are no objective measures for the quality of a visual design as there are for the responsiveness of a system, we might concede that agreement among professional designers can be regarded as equivalent to an "objective" measure. However, I was often struck by the difference between what designers would call a good design and what users prefer (for example, the more animated icons are used, the better).
The quality of on-screen texts and messages is another area in which everybody can "join in." Technical language or computer jargon, incorrect grammar, and typos can be detected by anybody and are known to have an impact on perceived quality. Researchers and practitioners have repeatedly found that such issues decrease many users' trust in a computer system. Users also tend to generalize these deficiencies to the system as a whole: If a product ships with low-quality texts, they conclude that the whole system will probably also be of poor quality. However, not all users detect textual issues, and not all of them are "picky" about them.
Ease of use ideally implies that users care little about a system: The system seems to be invisible to them, and they are deeply immersed in their actual tasks. This may sometimes lead to the paradox that it is difficult to elicit "high quality" judgments about the system from users because they are not even aware of it. On the other hand, cumbersome interaction and missing or hidden functionality – the opposite of ease of use – mean that users immediately perceive the system's quality as poor. They are quick to blame developers, and user interface designers in particular, for not caring about their needs.
Security and privacy issues are much more subtle than the issues covered so far. Nevertheless, they are highly relevant, particularly to Internet users. Users may ask themselves: Is this transaction safe? Can my credit card number or other secret code be intercepted by spy software? Are the e-mails and attachments that I receive safe? The list of security concerns is long and has been a permanent topic at recent UI design conferences. Compared with more visible issues, such as the quality of the visual design or texts, users feel much more insecure and on their own here. Therefore, it is the designers' obligation to establish a context that helps users build up trust in the system and perceive it as being of high quality – in this case, as ensuring security and privacy.
The discussion above reveals that the objective as well as perceived quality of software products is determined by a variety of factors and not only by the system's "pure" usability. Some factors are typically given more attention; others, such as screen texts, are traditionally neglected. The latter is, however, a dangerous mistake because text issues, for example, can be detected easily by users and may severely undermine perceived quality. I already mentioned that users tend to apply such issues to the overall quality of a software application, even though this may not be justified.
As a first lesson from my "tour d'horizon" of software quality, I have come to the conclusion that it is a fatal error to assume that the development or design team can afford to neglect certain quality aspects. Professional visual design and texts, for example, are essential to achieving good perceived quality. Users feel respected, appreciate the care that has been taken by the development team, and are motivated accordingly. Poor perceived quality, on the other hand, undermines users' trust in an application and may seriously reduce their satisfaction and motivation, which hampers user productivity – our ultimate design goal.
Finally, it will be interesting to observe whether development teams will learn to use tricks for improving perceived quality as they already do with perceived performance. Or are they already doing so, and I haven't noticed?