
CHI 2002 – Changing the World, Changing Ourselves

By Gerd Waloszek, SAP AG, SAP User Experience – Updated: January 27, 2004

This article continues the series of my personal CHI reports. The CHI 2002 conference was held in Minneapolis, Minnesota, and had the slogan "Changing the World, Changing Ourselves." So let's walk through some of the changes. But first of all, I will briefly introduce the CHI and my visit's focus and highlights.

CHI 2002 logo

Figure 1: The CHI 2002 logo – animated for better usability?

What is the CHI?

Most readers will probably know that the conference on computer-human interaction (CHI, also known as human-computer interaction = HCI) is the world's largest and most important usability forum. It is held each year by the HCI section of the ACM (more precisely: ACM SIGCHI = ACM's Special Interest Group on Computer-Human Interaction; ACM = Association for Computing Machinery).

Citius, Altius, Fortius???

While the CHI conference grew bigger and bigger each year in the past, this year's CHI was about a third smaller than last year's. Many reasons for this shrinkage have been given, but I will not contribute to these speculations.

Fifth-Street Towers, Minneapolis

Figure 2: Minneapolis' most famous skyscraper, the Fifth-Street Towers, in the evening sun

My Focus

This year, I focused on accessibility (or universal access) and future interfaces. My report reflects some of the "news" I collected at the CHI.

Overview: Spiritual Instead of Physical Highlights?

Last year, my CHI highlights were physical devices (the Tablet PC and IBM's high-resolution LCD screen); this year's personal highlights were of a more "spiritual" nature:

  • Gregg Vanderheiden's and Kevin Warwick's visions of thoughts or nerve signals commanding physical interfaces – including Kevin Warwick's personal experiments with chip implants
  • Stelarc's experiments with his body – including performances where his body is remotely controlled or where his body controls a device
  • Rosalind Picard's and Cynthia Breazeal's investigations into emotional computers and sociable robots

I did find one "physical" highlight:

  • The universal remote console (URC) concept

I explain below why I find this concept so remarkable, and I add some comments on future interfaces – such as intelligent furniture, windshields, clothes, and glasses, to name just a few...

My highlights follow the CHI slogan "Changing the world. Changing ourselves." For now, forget the current Web hype – you might just be surprised about what is to come...

Links are given in the References at the end of this article.

 

Part I: Changing the World

Universal Remote Console

The idea of a universal remote console (URC; see references) may not sound very spectacular, but it promises to change access to technical devices for many people in the future. The basic idea behind the URC is to define a universal protocol (Alternative Interface Access Protocol = AIAP) that enables any device to command any other device via remote signals. This approach is especially promising for disabled and elderly people, but it also brings hope to the many people who cannot program video recorders, use copy machines, or handle "intelligent" microwave ovens. With a URC, people can use devices of their choice as remote controls for other devices. For example, a blind person could use a refreshable Braille display as a remote control for a TV set. Sighted people could program their PDAs for this purpose. Gottfried Zimmerman and Gregg Vanderheiden gave a short demo of the principle but could not say when the industry will adopt this idea and offer devices that utilize the AIAP. (*)

*) Note: NCITS (National Committee for Information Technology Standards; www.ncits.org) has established a technical committee called V2 for working on a "universal remote console" specification as an essential part of a to-be-developed "Alternate Interface Access" standard (Alternative Interface Access Protocol = AIAP).

Source: Paper/Demo "Prototype Implementation for a Universal Remote Console Specification" by Gottfried Zimmerman, Gregg Vanderheiden, Al Gilman, Trace R&D Center, University of Wisconsin-Madison, USA. Tutorial "Flexible, Accessible Interfaces More Usable by Everyone" by Gregg Vanderheiden, Trace R&D Center, University of Wisconsin-Madison, USA, and Shawn Henry, Optavia Networks, USA.
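
To make the principle a little more concrete, here is a minimal sketch in Python – my own illustration, not the actual AIAP specification: a target device publishes an abstract, modality-free description of its functions, and any controller, be it a Braille display or a PDA, renders this description in its own way and sends commands back.

    # A minimal sketch of the URC principle. All class and method names
    # are invented for illustration; they are not part of the AIAP.

    class AbstractFunction:
        """One controllable function of a target device (e.g. volume)."""
        def __init__(self, name, values, current):
            self.name = name        # human-readable label
            self.values = values    # allowed values
            self.current = current  # current setting

    class TargetDevice:
        """A device (TV, microwave, ...) exposing its functions abstractly."""
        def __init__(self, name, functions):
            self.name = name
            self.functions = {f.name: f for f in functions}

        def describe(self):
            # A controller asks for this abstract, modality-free description.
            return [(f.name, f.values, f.current) for f in self.functions.values()]

        def set_value(self, function, value):
            # A controller sends a remote command back to the device.
            assert value in self.functions[function].values
            self.functions[function].current = value

    class TextConsole:
        """One possible controller; a Braille display, PDA, or speech
        console would render the same description in its own modality."""
        def render(self, target):
            for name, values, current in target.describe():
                print(f"{target.name}/{name}: {current} (allowed: {values})")

    tv = TargetDevice("TV", [AbstractFunction("channel", list(range(1, 100)), 1),
                             AbstractFunction("volume", list(range(0, 11)), 5)])
    console = TextConsole()
    console.render(tv)         # present the TV's functions in this modality
    tv.set_value("volume", 7)  # command the TV remotely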

Looking into the Future...

Now let's look at some new technical developments.

Ambiente and Roomware® – Intelligent Buildings and Furniture

The Ambiente project by the Fraunhofer Institute explores the role of offices and buildings as activity and experience spaces. Roomware® is part of the Ambiente project and explores computer-augmented room elements such as doors, walls, and furniture (for example, tables and chairs) with integrated information and communication technology. A cooperation with the German furniture manufacturer Wilkhahn added a more stylish look to the second-generation Roomware® components. The networked environment is managed through the hypertext software Beach, which connects all room elements and allows computational objects to be exchanged easily between different Roomware® components. The software is operated with pens and gestures. The currently steep price tag will, however, prevent Roomware® from invading many households.

Roomware® in action

Figure 3: The newest generation of Roomware® in action (from the Ambiente website)

Source: Paper/Demo "Roomware® – The Second Generation" by Norbert Streitz, Thorsten Prante, Christian Müller-Tomfelde, Peter Tandler, Carsten Magerkurth, Integrated Publication and Information Systems Institute, Germany
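
As a rough illustration only – the actual Beach software is, of course, far more elaborate, and everything below except the Roomware® component names InteracTable and DynaWall is invented – the exchange of computational objects between room elements might be sketched like this:

    # A toy sketch of moving a computational object between Roomware(R)
    # components; my own illustration, not the Beach API.

    class RoomComponent:
        """An interactive wall, table, or chair in the networked room."""
        def __init__(self, name):
            self.name = name
            self.objects = []

        def receive(self, obj):
            self.objects.append(obj)
            print(f"{obj!r} is now shown on {self.name}")

    class Room:
        """Stands in for the software that connects all room elements."""
        def __init__(self, components):
            self.components = {c.name: c for c in components}

        def move(self, obj, source, target):
            # In reality, a pen gesture on the source component would
            # trigger the transfer.
            self.components[source].objects.remove(obj)
            self.components[target].receive(obj)

    table, wall = RoomComponent("InteracTable"), RoomComponent("DynaWall")
    table.objects.append("sketch-1")
    room = Room([table, wall])
    room.move("sketch-1", "InteracTable", "DynaWall")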

e-Windshield – Media Windshields

As most people spend a lot of time inside their cars, why not turn cars into computing and entertainment environments? The downside is, however, that thanks to this new technology we will spend even more time in our cars. For example, instead of sitting in front of a computer in the living room, we can use our cars for connecting to the Internet. This vision is explored with the e-Windshield (also called media windshield): this prototypical car windshield acts as a projection screen that can be used in a variety of scenarios:

  • In car – driving: Sections of the windshield might act as a trip computer and navigation system; the windshield might also intelligently keep glare away from the driver
  • In car – not driving: The windshield as a whole might be used as screen for an entertainment system or as a computer screen (internal)
  • Not in car: The windshield might be used for providing information to the public and for advertising (external); parked cars might be synchronized to form large information displays or tickers

Prototype of the e-Windshield

Figure 4: A prototype of the e-Windshield (from Ernesto Arroyo, MIT)

Source: Paper/Demo "E-Windshield: A Study of Using" by Ted Selker, Winslow Burlson, Ernesto Arroyo, MIT Media Lab, USA

Intelligent Clothes and Glasses, Digital Jewelry, Smart Dust

Now our trip into the future of buildings, rooms, furniture, and cars finally narrows down to things we wear, such as clothes, jewelry, and glasses. A couple of years ago, Philips announced the arrival of "intelligent" clothes with built-in electronics ("wearable electronics"). Curiously, I could hardly find any information on this project on the Web, even though the promised date has long passed. On the other hand, Infineon presented electronic circuits for use in "smart clothes" this April. Wearable electronics might be used to integrate audio equipment, weather information systems, biometric sensors, and many other things into clothes. In a similar vein, digital jewelry has been explored by IBM and others, for example, with the iButton. Applications are similar: identification (access control, electronic identity paired with medical data or time and attendance), external memory, weather station, cashless payments, and so on.

IBM (Vision Pad), the MIT Media Lab (Remembered Spectacles), and others are investigating "intelligent" glasses, which might act in a similar fashion to the e-Windshield in the "driving" scenario. These glasses augment vision through a virtual display that overlays normal vision and presents information about objects in the field of vision. In one prospective usage scenario, people might be "annotated" with information (labels, tooltips), such as their name, position, a short curriculum vitae, or other personal data. Such glasses might also be used as a navigator or translator. In the latter case, street signs and advertisements in, for example, Japanese might be translated into the wearer's native language. Another prospective usage scenario is giving instructions to service people who repair copiers, printers, or other complex devices.

Finally, smart dust allows the world to be "penetrated" with small identification and location sensors. These sensors identify objects, report their location, and have the potential to report their state if more memory is built in. Possible applications are of a military nature, of course, but the author Kris Pister (University of California, Berkeley) also conceives of many everyday applications, such as virtual keyboards, inventory control, smart office spaces and conference rooms, interfaces for disabled people, and so on. Recently, I learned from Brygg Ulmer from the Tangible Interfaces Group at MIT that they, too, use smart dust for their "physical" interface objects.
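
For illustration, here is a minimal Python sketch of the kind of report such a sensor might send – identity and location always, state only when enough memory is built in. The message layout is purely my own assumption:

    # A toy model of a smart-dust report: the message layout is an
    # assumption for illustration, not an actual smart-dust format.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MoteReport:
        mote_id: int                  # identifies the tagged object
        position: tuple               # reported location (x, y, z)
        state: Optional[str] = None   # e.g. "in use"; needs extra memory

    reports = [
        MoteReport(42, (1.0, 2.5, 0.0)),                  # minimal mote
        MoteReport(43, (1.2, 2.5, 0.0), state="in use"),  # mote with memory
    ]
    for r in reports:
        print(r)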

Source: Tutorial "Flexible, Accessible Interfaces More Usable by Everyone" by Gregg Vanderheiden, Trace R&D Center, University of Wisconsin-Madison, USA, and Shawn Lawton Henry, Optavia Networks, USA

Investigations in Emotional and Sociable Computer Systems

There has been a long-standing debate among Artificial Intelligence people and cognitive scientists on whether computers can have emotions. Since the publication of the book "The Media Equation" by Byron Reeves and Clifford Nass, we do not need to worry too much about that issue: people treat computers like humans anyway, irrespective of how human-like they are. Here, I report the work of two researchers at MIT who investigate the role of emotions in the interaction between humans and computer systems.

Rosalind Picard

Figure 5: Rosalind Picard tries to classify human emotions (from R.P. homepage)

Rosalind Picard works on wearable computers and new-generation interfaces. She conducted research on the automatic recognition or classification of human emotions – for example, confusion and interest – from human faces. In one study, people expressed "artificial" emotions; 80% of these were correctly classified by a computer system. She also measured mouse pressure and found that people increase it after experiencing a usability bug. As her systems are also able to detect expressions of confusion and interest, the way is open for computer systems that assist humans much more intuitively when bugs appear or users are confused – and keep silent when users are highly interested.
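
As a playful illustration of the mouse-pressure finding, here is a minimal Python sketch that flags possible frustration when recent pressure clearly exceeds a user's baseline. The threshold and window are invented for this example; Picard's actual models are, of course, far more sophisticated:

    # A toy frustration detector based on the idea that mouse pressure
    # rises after a usability bug. Threshold and window are invented.

    def frustration_suspected(pressure_samples, baseline, factor=1.5):
        """Flag frustration if recent pressure clearly exceeds the baseline."""
        window = pressure_samples[-10:]
        recent = sum(window) / len(window)
        return recent > factor * baseline

    baseline = 0.4                            # calibrated per user
    samples = [0.4, 0.4, 0.5, 0.9, 1.0, 1.1]  # pressure after an error dialog
    if frustration_suspected(samples, baseline):
        print("Offer help instead of another error beep.")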

Cynthia Breazeal and her Kismet robot

Figure 6: Cynthia Breazeal and her Kismet robot (from C.B. homepage)

Let's combine that with Cynthia Breazeal's investigations with a "sociable" robot named Kismet. This robot mimics a human face and can express emotions in reaction to user behavior or instructions. Thus, a computer system could not only detect when a user is confused, for example, because of a usability bug – now it can also express its pity for the user. Both the user and the computer can then bemoan the problem and create a climate of mutual understanding. This empathy does not solve the usability problem, but it can at least make the problem appear less frustrating to the user.

Until that turns into reality, we could at least modify current harsh error messages, such as

  • "Syntax error" or "Error #123456, call system administrator"

to more empathetic messages, such as

  • "The OS Z system monitor regrets to have to indicate a 'syntax error' (please, call the system administrator for details at #9876). We apologize for the inconvenience caused by the error and hope that you will have a pleasant continuation of your work. Thank you again, for working with OS Z."

Source: Panel "Future Interfaces: Social and Emotional"

 

Part II: Changing Ourselves

Thoughts Command Physical Interfaces

Roald Dahl tells a story about a couple, Mary and William, in which the husband is reduced by surgery to a brain with one eye. The brain swims in a nutritional liquid and can perceive its environment. Roald Dahl's vision has been extended in many ways. Artificial Intelligence pioneer Marvin Minsky, for example, would like to have his brain dumped to a CD (sort of...) so that he can "survive" the physical death of his body until he can be revived in a new "eternal" (probably artificial) body.

Other visionaries let the brain interact with the environment by coupling the nervous system with electronic circuits. Obviously, we are at the doorstep of commanding our environment with our brains. Scientists such as Kevin Warwick (and, in a certain way, Stelarc, too) explore implants that connect the nervous system to electronic circuits, which in turn command electronic devices. Currently, there is still little understanding of what the peripheral nerve signals that Kevin Warwick is exploring mean and what effects they have. But our understanding of these signals will undoubtedly grow in the near future. In addition, connections will be made to brain areas where there is better knowledge of the meaning and effects of neural signals. (Kevin Warwick also plans to explore this option soon.)

Kevin Warwick on video

Figure 7: Kevin Warwick appeared via QuickTime video and cell phone

In laboratory experiments, monkeys are already able to manipulate a cursor on a computer screen, as Gregg Vanderheiden explained in his tutorial.

Hans Moravec goes even a step further in his book "Mind Children." He claims that robots will take over power in about 30 to 40 years. He is, however, unsure whether the robots will keep humans as "pets" or simply get rid of them. This way, Moravec even destroys the vision of brains commanding the world: in his opinion, there will no longer be a need for "natural" brains – they are too rigid for evolution... Moravec's claims are based on Moore's law, which predicts an exponential increase in the complexity of computational systems over time. I do not believe that Moravec's vision will come true in the time frame he suggests. In my opinion, physical complexity alone does not make intelligent systems – appropriate algorithms are also needed. In addition, Artificial Intelligence has made a lot of promises that never came true. For example, reasonable automatic translation was one of the starting points of AI but is still not on the horizon. Even speech recognition is still restricted to special domains, as I learned in a panel about speech interfaces.
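
A quick back-of-the-envelope calculation shows how much raw complexity Moore's law – read here as a doubling every 18 months, which is one common interpretation – predicts for Moravec's time frame:

    # How much does complexity grow in 30 to 40 years if it doubles
    # every 18 months? (The 18-month period is one common reading of
    # Moore's law.)

    for years in (30, 40):
        doublings = years * 12 / 18
        factor = 2 ** doublings
        print(f"{years} years: {doublings:.0f} doublings, about {factor:.0e}-fold")
    # About a million-fold after 30 years and a hundred-million-fold
    # after 40: enormous capacity, but capacity alone, as argued above,
    # does not make intelligence.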

Source: Panel "Future Interfaces: Social and Emotional"; Panel "Getting Real About Speech: Overdue or Overhyped?"; Hans Moravec was cited by Gregg Vanderheiden

The Body Separated from the Personality

So, if some people try to come closer to the old vision of the brain commanding the environment and doing without the body, why not separate the body from the mind and treat it as an external, even "obsolete" entity? The Australian artist Stelarc explores just this option with his performances.

I personally would not regard Stelarc's presentation and performances as a "highlight" – maybe my moral preconceptions forbid this. But the idea behind his explorations and how it complements those AI visions of an eternal brain and an obsolete body not only disturbs me but also grabs my attention. And this is what a "highlight" should do – at least on a user interface...

Stelarc with student volunteers

Figure 8: Stelarc with student volunteers demonstrating involuntary arm movements

Source: Closing Plenary "Interfaces for Alternate, Automated and Involuntary Experiences – Prosthetics, Robotics and Remote Control Operation Systems" by Stelarc

 

Final Words

This is probably not the type of CHI report most readers expected. I should therefore have added a final caveat to the conference slogan: "And Changing the CHI Reports."

 

References

CHI 2002 and Minneapolis Links

Links

Please note that I cannot guarantee that the links will continue to function in the future.

Books

  • Hans Moravec (1998). Mind Children: The Future of Robot and Human Intelligence. Harvard University Press.
  • Rosalind Picard (1997). Affective Computing. MIT Press.
  • Reeves, Byron; Nass, Clifford (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. CSLI Publications.

 

All photos, unless otherwise noted, by Gerd Waloszek (taken with a Minolta Dimage 7).

 
