Note: This was published as the last of my bi-monthly columns in the ACM CHI magazine, Interactions. I urge you to read the entire magazine -- subscribe. It's a very important source of design information. See their website at interactions.acm.org. (ACM is the professional society for computer science. CHI = Computer-Human Interaction, but better thought of as the magazine for Interaction Design.) This article was printed in Interactions, volume 17, issue 6.
Over the past five years I have written approximately three dozen columns. What has been learned? What will come? Obviously it is time for reflection.
My goal has always been to incite thought, debate, and understanding. Those of us in the field of interaction, whether students, researchers, or practitioners, whether designers or programmers, synthesizers or analyzers, all share some common beliefs and ideals. One of my jobs is to challenge these established beliefs, for when examined, they often rest on an ill-defined platform, with no supporting evidence except longevity: they have been around so long that they are accepted as given, without need for examination. We need a rigorous foundation for our work, which means questioning anything not firmly supported by evidence, even if it appears obvious. Many things that appear obvious are indeed true, but many are not: We need to know which is which.
I've questioned the role of usability in products, suggesting that although it is important, it is never the most important characteristic. I questioned the time-honored prescription that we should design products by first engaging in user research, then a period of iterative rapid prototyping, assessment, and reconceptualization. Although this sequence seems eminently logical and necessary, it seldom works in the time- and money-constrained world of product development. The research community has argued for the critical importance of doing research first, but with continual failure. This should be a signal that the logical time course simply cannot be accommodated by the reality of product schedules, so that other solutions and approaches must be developed. We cannot continue to argue that everyone else is wrong.
The research-practice gap is real, I have argued, and we should stop pretending that researchers and practitioners speak the same language. Human-centered design, I argued, can be harmful, for among other things it is really a sophisticated form of design by committee, guaranteeing consistently good results. But what if the goal isn't merely the good but rather the great and wonderful? Then HCD gets in the way. Great designers do not use HCD. And as a corollary, great designers have both great successes and great failures. HCD promises to avoid both these extremes.
Everyone, it seems, cries out for simplicity: we need simpler products, goes the cry. But when I examined simplicity, I found that even those who cried the loudest actually complained when given simple products. Simplicity, I have argued several times, should never be the goal. The world is complex, and so too must be our tools. Complexity is good. The quest for simplicity is misguided. What people need is understandable products. They need the power that complexity provides, but they seek the comfort zone of things that can be understood. It is confusion that they dislike. This is a great finding, for it puts the major onus on designers: transform complexity into understandable products. Things that are understandable are judged to be simple. (This argument is complex enough that it has led to a book, Living with Complexity.)
"Affordances" is a useful concept gone wrong, a point I argued several times. We must distinguish between an affordance (which is an amazingly ill-understood relationship) and signifier, which is closer to what designers need to attend to. I've made that point in my columns and in "Living with Complexity."
The Lack of Feedback
I have examined many themes in my many columns. Have they made a difference? How can one tell? If I am to judge by the paucity of email I receive, the infrequent citations, even in blogs, and the need for me to repeat many of my arguments year after year, I would have to say that the columns have not had any impact. Is this due to the inelegance of the columns, the passivity of this audience, or perhaps the nature of the venue itself? I reject the first reason out of self-interest and the second out of my experience that in person, you are all a most vocal group. That leaves the third reason.
Interactions, the magazine, plays an interesting, little-understood role. Over the years it has become a substantive, interesting publication, focusing more and more upon the larger issues of the field, emphasizing design more than analysis, and providing thought-provoking pieces for ... for whom? Interactions occupies a weird space in the world of publications. It is primarily available to members of CHI, which automatically limits its appeal. For years, none of the articles were available on the open, free Internet, and even today, only a few are available. Search and, until recently, you shall not find.
I recently became a columnist for Core77, an open, free Internet magazine for industrial designers, and my first post received more responses, blog comments, and consideration than the total of the responses during my five years of columns in Interactions.
It is time for ACM, a non-profit organization dedicated to the free dissemination of knowledge, to stop hiding behind paid subscription walls and get its material out in the open for everyone to share. ACM, like many scientific societies, has lost track of the open, knowledge-sharing role of science and has instead been governed more by old-fashioned media rules than by the modern world of freely accessible media.
Interactions fails to impact the larger world of research outside of ACM's CHI because of its failure to be open and accessible. At the same time, it fails to impact the academic research world because it is neither peer-reviewed nor the repository of the weighty, careful experimental, rigorous knowledge required by promotion committees in universities. So what is Interactions? Neither a serious scientific publication nor a popular, influential one.
This field has matured since its early beginnings at the dawn of the personal computer. Where are we going now?
This is a particularly fertile time for the industry. Microprocessors are everywhere, embedded, invisible, ubiquitous. The age of the invisible computer is here. We interact with machines all the time, often without being aware of the interaction (which is what superior design should aim for). We moved away from command-line interfaces to graphical displays where everything was visible, where feedback was immediate, and where graphical design could take advantage of display capabilities to present attractive, understandable interactions that were both pleasant to use and pleasant to behold. Now we are moving away from the heavy reliance upon screen, keyboard, and pointing devices toward physical devices, operated by gesture, location, motion, and other physical movements. Cloud computing makes our information always available, no matter where we are or what device we are using. Powerful, multiple processors coupled with unimaginably large memories allow for the recording of everything and the analysis of everything. Artificial intelligence is back, this time leveraging statistical analyses of trillions of bytes, terabytes, and beyond.
Everything is always available, except when it isn't. Cloud computing just failed me when I was at a conference in Toronto, unwilling to pay the exorbitant charges levied by service providers for data connection while roaming. Even the auditorium of the University of Toronto's medical school where we were meeting failed to have wireless Internet connectivity. So our smart cellphones were useless. No restaurant guides, no maps, no location-based services. For many of us, not even access to email on our phones. Politics and corporate greed trump technology, once again. And in those places where cloud computing does work, making everything always available, the crooks, thieves, and troublemakers have also discovered that our data are available, so that much of our email today is fraudulent, some even destructive. What will the future bring?
Moving Forward: The Challenge
One thing is certain about our profession: the need for our skills will never go away. Solve one set of problems, and the next arises. Get one industry going strong, and the next industry beckons. From oil well spills to airplane crashes, from computer screens to kiosks, cellphones, and controllers for the home. From industrial equipment to medical error. Even the Olympics is not immune: in the 2010 Winter Olympics, when a luger went off the track and was killed, the Olympic committee announced that it was due to human error, ignoring all the evidence that pointed to poor design of the track. When in doubt, blame the victim.
The invisible computer is here. We use the large desktop machines less and less, instead resorting to portable laptops, netbooks, pads, and smartphones. More and more we simply use specialized devices, where the computer is invisibly embedded. Powerful yet inexpensive sensors allow us to interact through body motion, position, and gestures. Soon, many of the scenarios from movies and science fiction novels will be reality, both the exciting positive ones and the frightening, nasty ones.
How will people cope with the wild array of sensors, processors, and displays, where active, interacting displays will be ubiquitous on walls, ceilings, and floors, on panels and clothing, on displays and advertisements? These devices will recognize us, follow us, and both control and be controlled by us. The design issues here are enormous. New modes of operation coupled with new forms of input and output will require new ways of providing the basics of interaction design: visibility, discoverability, consistency, feedback, and appropriate conceptual models.
As Jakob Nielsen and I remarked in a recent Interactions piece, new technologies bring about new modes of behavior, often meaning that each time we take two steps forward, we also take a step backward. We used the example of gesture and multi-touch interfaces, but the argument applies to any change in technology, method, or task. In these cases, we gained flexibility, power, and enjoyment, but lost in discoverability, consistency, and control. The study of human interaction will long be an essential component of all technological systems: our field will long be needed.
Don Norman wears many hats, including co-founder of the Nielsen Norman Group, Visiting Professor at KAIST (South Korea), and author, his latest book being Living with Complexity. He lives at jnd.org.
Column written for Interactions. © ACM. This is the author's version of the work. It is posted here by permission of ACM for your personal use. It may be redistributed for non-commercial use only, provided this paragraph is included. The definitive version will be published in Interactions.