A graduate student of mine, worrying about how to teach the principles of good design to undergraduates, suggested that we should use writing as an example. "We should teach them," he said, "to think of the problem of designing something that people will find understandable and easy to use as the same problem as writing something that other people will understand and find easy to read."
It's a wonderful idea, but it would fail: most people can't write well. In fact, I often use the idea in the reverse direction: in my attempts to teach students how to write, I use good design as an example. Think of the problem of writing something that other people will understand and find easy to read as the same problem as designing something that people will find understandable and easy to use.
Writing is like design: design is like writing. Although it is useful to try to teach one based upon the properties and needs of the other, either attempt is apt to fail because people tend to be bad at both. To be successful at either task it is important to be able to take the other person's point of view, to understand their background and interests, and to make the work fit the powers and limitations of human cognition. A good designer and a good writer have to share certain characteristics, among the most important being "empathy."
Empathy: "understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts and experience of another."
But empathy isn't enough. It is easy to be fooled, easy for someone to think they understand their users or readers when, in fact, they base their notions solely on their own knowledge and experience. Nothing can be worse than a writer or designer who thinks everyone else is just like them (unless it is the designer or writer who thinks they are unique and special and nobody else is like them). To be successful, both writing and design have to follow basic psychological principles. And then they must be tested, tried out with readers or users who are similar to the intended audience, and then revised in whatever manner the test results indicate. All this takes a lot of effort and time. Time to learn the principles and appropriate techniques, time to practice them, time to test one's writing or design, time to revise, retest, and re-revise. Few are willing to expend that much time or effort.
It is amazing how resistant many people are to the requirements for good design or good writing. The problem is more obvious in writing because so many more people write than design.
One of the things that stands out when talking to designers and long-term users of poorly designed systems is that these people take great pride in their skills. They had to go through great difficulties to master the system, and they are rightfully proud of having done so. That, by itself, is all right. The problem is that the difficulties become a test of the person or group. Then, rather than ease the situation for the next people, the difficulty is used as a sort of initiation rite. The hardy survivors of the experience claim to share a common bond and look with disdain upon those who have not been through the same rites. They share horror stories with one another.
This carries over to much of everyday life. Is the new computer system in the office difficult to learn and to use? Imagine trying to complain to the people who have already mastered the system.
"Tough," they will say (or maybe just think to themselves). "It is supposed to be difficult; that is how we separate those with ability from those without. Besides, all of us had to spend a lot of time learning it, losing a lot of work along the way, so why shouldn't you?"
People who use computer systems or complex office machinery (such as copiers and the modern office telephone) will recognize the "initiation rites" syndrome. Those who have mastered the systems feel a great sense of superiority over those who have not. And those who have not, in turn, feel inferior, incompetent, and powerless.
My complaints about computer systems have often encountered this attitude. In the world of computers, there is a software system called "Unix" that many people believe will become popular in the workplace. (If you don't know anything about Unix, don't worry, you don't need to. And if you are very lucky, you will never need to.)
Many years ago I wrote an article about the evils of Unix, explaining that it might be a very fine piece of computer science, but as a system intended to be used by ordinary people, it was a disaster. "My secretaries persist only because I insist," I said, pointing out the obscure commands and the lack of standards for terminology and procedures, to say nothing of the ways in which Unix could destroy months of work through a moment's mishap. The points I made then are no longer controversial, but at the time they were heresy. I was attacked by hundreds of professional programmers across the country. If I didn't approve of Unix, they told me, I had no business using it. Besides, who was I to criticize computer software? In other words, you weren't allowed to criticize unless you were a professional. Being a mere user of the stuff didn't qualify. To my dismay, I had to prove my credentials as a competent scientist and programmer before they would listen further, not that my credentials had anything to do with the merits of my arguments about the ease of use of Unix.
There is an old saying: "If you can't stand the heat, get out of the kitchen." I have a different approach: let's try to do something about the heat. The saying would have us accept the poor designs of the world. Why? After all, if people were responsible for the "heat" in the first place, then people should be able to do something about it. Is the kitchen too hot? Why not redesign it so that it is comfortable?
If It's Easy to Understand, Then It Can't Be Very Profound
More than 20 years ago, in an interview, Vonnegut said: "We must acknowledge that the reader is doing something quite difficult for him, and the reason you don't change point of view too often is so he won't get lost, and the reason you paragraph often is so that his eyes won't get tired, so you get him without him knowing it by making his job easy for him." I especially love the "get him without him knowing it" part, but Vonnegut has been almost too successful at that. Among his more stupid readers are those critics who can't tell the difference between easy reading and easy writing; because his books are so easy to read, Vonnegut is accused of "easy" (or lazy) writing. I think you have to be a writer yourself to know how hard it is to make something easy to read -- or else you just have to be a little smart. (Chapter Note: From a commentary by American novelist John Irving on fellow novelist Kurt Vonnegut (Irving, 1990).)
Obscurity is often thought of as an essential ingredient in academic writing. Someone who writes clearly is viewed with great suspicion. Basically, the idea appears to be that if the writing is easy to understand, then the ideas beneath it must be inferior: simple writing reflects a simple mind.
In the field of literary criticism, deconstructionism rages among academics in Europe and the United States. What is the deconstructionist approach to literature? The name itself helps illustrate the approach: "deconstructionism" -- an unintelligible, artificially created word. Can you find out what deconstruction might be by reading any of the writings in the field? Barely. One authority has called the writing "obscurantist terrorism." Another explained it this way: "The idea is that the style is so obscure that it's hard to know what the author is trying to say, and this allows the savant to heap contempt on his critics by saying they have failed to understand him." A third observed: "The theorist's gaudy patter confirms his or her expertise by reinforcing the lay person's sense of ignorance. Professionalism encourages obscurantism: it is easier to justify a steep medical bill when the diagnosis is vasomotor rhinitis and not the common cold."
Chapter Note: "obscurantist terrorism." From Michel Foucault, quoted in Lehman, (1991), p. 77. The other two quotations in that same paragraph are from Lehman's book (Lehman, 1991).
But These Are Complicated Topics
One standard excuse of obscurantist authors is that the material in question is complex and technical, sometimes very abstract and refined, so the fact that the writing is difficult to follow is unavoidable. The argument then gets turned around: the inability of readers like me to follow such complex thoughts reflects upon me, the reader, not upon the writing. If I really cared, I would do the work required to understand. And if I still can't, well, I should just face up to the fact that my mind isn't sufficiently powerful. Complex ideas require complex writing, and then complex, powerful minds to deal with them. Simple writing is for simple ideas, simple minds.
Is there any case to be made for this? It sounds to me suspiciously like those folks who told me that if I made errors using the Unix computer system, why then I had no business using it. Clearly those who are incompetent to use something or to understand a text have no business trying to do so. Isn't this a great defense? You can cover up any kind of inelegant design or writing this way. Wonderful.
Sure, some thoughts and ideas are complex, but the real test of the power of the idea — and of the thinker — is the ability to translate it into terms that the rest of us can understand. In fact, as long as the ideas and their expression remain convoluted and complex, there is a good chance that they are wrong, even fuzzy-headed. The complexity of the writing masks both the idea and its falsity. In my own case, I once was told that a colleague working on a paper with me had said "The trouble with Norman's writing is that it's so clear that it's easy to tell when the ideas are wrong."
The implication, of course, is that the correct way to deal with a bad idea is to hide it, disguise it, bury it in indecipherable writing. After all, if an idea is wrong there are only two things one can do: fix it or disguise it. The first is very hard, sometimes impossible. The second is easy. (Actually, there is a third thing, but this is hardest of all: admit the idea is wrong.)
Now I have to admit that I am treading on dangerous ground here. If all I have is a simple mind, of course I will defend simple writing. After all, if my mind isn't powerful enough to understand the subtleties and complexities of really difficult arguments, then I will want to complain that the difficulty is with the arguments, not with my mind. All my complaints about the difficulty of using so-called "badly designed" things or of understanding so-called "badly written" writing might really be reflections on me, not on the material. Aren't my complaints self-serving?
The issue of deciding who is right, the designer/writer or the user/reader, reminds me of other scientific quarrels. Consider the effect of age on intellectual ability. Young scientists find that human intellectual abilities decline with age, starting in the late twenties or thirties. However, older scientists doing the same kind of research show that intellectual ability does not decline with age, at least not after you exclude the effects of illness, "cohort" effects, and other technical complexities. Sure, physical and sensory abilities decline — physical strength, vision, hearing, speed of response — but certainly not intellectual ability. In fact, older scientists point out, knowledge, or what is called "crystal intelligence," is not only unimpaired, it improves with age.
I am similarly reminded of all the studies by middle-aged, white, male professors, administrators, and researchers showing that there is no discrimination against people based on age, sex, or color. I am sure you can think of other examples.
Obviously I am not trying to make the point that we are all hopelessly biased by our ages, professions, social status, or whatever. In fact, that claim would argue against my own point: it would imply that my argument in favor of clarity of writing probably simply reveals my biases and my own limitations. Nonetheless, I press on with my ways.
Who Should Do the Work, Writer or Reader?
"(Handwritten) Manuscript culture is producer-oriented, since every individual copy of a work represented great expenditure of an individual copyist's time. Medieval manuscripts are turgid with abbreviations, which favor the copyist although they inconvenience the reader. Print is consumer-oriented, since the individual copies of a work represent a much smaller investment of time: a few hours spent in producing a more readable text will immediately improve thousands upon thousands of copies." (From Ong's (1982) study of "Orality and Literacy," pp. 122-123.)
The importance of good writing has changed over time. In the early days of writing, the technology of writing was not easy to master, and there were not very many readers.
Imagine how many revisions you would make if the writing were done by chiseling each word into rock. There is no evidence that any serious writing was ever done in quite this fashion, but in the first centuries of writing, the task was not easy. Early writing surfaces were not easy to use: papyrus, leather hides, cloth. Early writing instruments were clumsy and had to be continually tinkered with. Each copy of a book had to be made by hand. No wonder that the author and copyist both did whatever made their task easier, as the opening quotation of this section points out. Who cared about the reader? It was the writer or copyist that mattered. Early books were often designed to look good rather than to be readable. Words were broken wherever convenient, and the style of typography changed willy-nilly. On one book's title page, the biggest word is the first one, "the": its impressive size was obviously not a mark of its importance; it just looked better that way.
The first books were designed to simplify the task of the writer or copyist, and to look pretty to the viewer. The reader was not important. Much reading was done aloud anyway, so that only one person had to struggle with the text; all the rest only had to listen.
The same points hold today, even though the technology of writing has advanced: the hard work is no longer the physical act of writing but the mental work of creating and refining one's thoughts into a form the reader can understand. It is easier for writers to let all their thoughts spill onto the page as they occur than to do the hard, time-consuming work of making those thoughts clear and easy for readers.
Conscientious authors find they must spend considerable time writing and rewriting. An article must be written and rewritten, passed around to students and colleagues, and then revised yet again. It might take months to get a paper in shape for submission to a journal. I once had a friend, a well-respected young scientist making his way up the academic ranks, who felt he didn't have such time to waste. He had a plan to get promoted, and this required a steady stream of academic papers. He would write a paper once, and that was it. Write it, submit it to the journal, and, assuming it got accepted for publication (which it almost always did), rewrite it once more, but only enough to take care of whatever scientific questions or reservations the editor had forwarded. Then he would go on to the next paper (which in fact he had already started even before he had mailed off the previous one).
This scheme worked because scientific journals seldom examine writing style: the editors and reviewers read only for scientific content, looking carefully to see if there are flaws in the arguments, if previous work was properly discussed, if the experiments were well conceived and without obvious artifacts that might have affected the results, and if the interpretation follows from the data. Scientists are not known for their writing ability, and so the editors and reviewers themselves might not even have known why some papers were easier to read than others.
When I complained to my friend that his papers were too difficult to understand and that his talks were packed with too much material for the audience to follow, his response was the classic one of the author-in-a-hurry: "My time is too valuable to spend polishing my papers or talks. It is more important to get on to the next set of ideas. If people want to know what I said or did, everything they need is there: they just may have to work hard, that's all."
The flaw in the argument, of course, is this: why should anyone bother? How will they even know that something worthwhile is in there if it cannot be understood? Sure, the work got published, and sure, my friend got promoted, but did anyone read the papers? Did his work have any impact? That is where readability matters.
There is an interesting tradeoff between the work required to write well and the work required to read well. The harder the author works, the easier it is for the reader. Hasty, inconsiderate authors create hours of effort for their readers. Careful, conscientious writers simplify the task for readers, but at the cost of great time and effort for themselves. Whose time is worth more: that of the one writer or of the many readers?
Another way of looking at the issue is to ask how many people are involved on each side of the work. A writer works alone, although actual publication of an article or book will eventually involve a dozen or so people who read the material, approve it, edit it, and often try to patch up the incomprehensible parts. But the potential audience numbers in the thousands, and for really popular works, millions. If one person's work is to be read by that many people, then it is worthwhile to spend the extra time to help them.
Now, in some sense, my academic colleague may have been right about the worth of his efforts: the readership of his articles numbered in the hundreds, and perhaps for such a relatively small number, great effort by him was not required. Then again, maybe his readership was only in the hundreds because of his lack of effort.
Did you know that many scientific articles are probably never read? Whether or not a particular article in a scientific journal is actually read is difficult to tell, but we can determine how many other authors refer to it, because scientific writing requires that one give proper credit to ideas, even ideas that argue just the opposite of the point one is trying to make. In fact, opposing ideas are considered very important, because each author must carefully listen to the opposing voices and try to explain why they are either mistaken or why they perhaps do not apply in this particular case. (Or at least, each author has to pretend to pay attention to the opposition, even if privately they scorn and distrust them.)
Because citation of other papers is so important in scientific writing, we can get a sense of how much impact a paper has had by seeing how many other papers refer to it. Most scientific libraries subscribe to an important reference service called the "Citation Index" that lets you look up a paper and see who else has referred to it. This is a valuable way of learning about research in a scientific specialty: find an early research paper that is known to be important and then look for other papers that refer to it. This lets you follow the path of further work, up to the current date.
The citation index also turns out to be a good way of studying the history, philosophy, and sociology of science itself, allowing the researcher to follow the trail of citations and examine the complex network of interactions among scientists. Sometimes scientists look up their own papers in the citation index just to see who has referred to them. It is through studies of the citation index that we know that interesting statistic: an amazingly large number of scientific papers never receive even a single citation.
But citations are a funny business. Some papers are cited a lot primarily because they are cited a lot. Positive feedback: if a paper is cited once, someone else might thereby hear about it and then refer to it. As more and more people refer to it, the paper becomes known as a standard citation. Others see it mentioned a lot, and then when they want to show that they are scholarly and know what has happened in the past, well, they refer to it also. Even if they themselves have never read it.
Why does this happen? It's common practice in science (and maybe in much of life) to defend one's opinions by reference to authority (see Latour, 1987). In fact, I just did it. I made the audacious claim that scientists often back up their claims by referring to some authoritative person who made the same claim rather than by making a logical argument or by demonstrating its validity with experimental evidence. How did I defend that claim? By citing an authority (the French sociologist, Bruno Latour).
Chapter Note: Latour's book, Science in Action, really is an authoritative analysis of the way scientists work. I used it in this essay both as illustration and also to suggest that you might find it interesting as well.
Latour points out that although every field has a standard set of papers that are agreed to be authoritative, it is all somewhat of a game that scientists play. Just citing the papers is enough: there is no need to have read them. I have discovered that a number of my own articles are sometimes cited to buttress some point the author is making even though my article has nothing to do with the point or, in some cases, argues just the opposite. It's OK, though; it's all part of the game. And every so often there is a shift in the way scientists view the phenomena of the field: a new set of authorities appears and, oops, the old authorities are not so authoritative anymore.
For the Benefit of Others
What I find most peculiar about this business of writing and design is that these activities are presumably done for the benefit of others, so shouldn't the needs and abilities of those others be considered? A good writer and a good designer have many things in common. They need to understand the needs and abilities of their audience, and they must consider just how the product will be used.
If you are designing a sophisticated machine for the home, shouldn't you try it out on some home dwellers before manufacture? If you have written an instruction manual for some appliance, shouldn't you ask some prospective appliance owners to use the manual before you print it? Obviously the notion of trying out the material on the intended audience is not popular among designers and writers of instruction manuals, because if it were, we would not have so many unusable products, so many unintelligible instruction manuals.
Now most designers and writers dispute this point. After all, they are fond of pointing out, they are designing things for people, and since they are people themselves, they know just how people work and just what people need.
The argument doesn't work. First of all, designers and writers are not ordinary people: they are designers and writers. A designer of a kitchen appliance probably spends all day at work designing, not using the kitchen. The person who designs an adding machine or a carpentry tool may not be an accountant or a carpenter. And even if they were, skills at these tasks vary widely, and no single person can be expected to be aware of the wide range of needs of the prospective audience.
Finally, once a person has worked hard on a design or writing project, they then know too much about the material to be able to step back and look at it with a neutral eye. Unintelligible sentences seem perfectly reasonable. Even misspelled words or only half-finished sentences can be overlooked. Awkward or dangerous design features can be passed by if the designer is too familiar with the work. In order to evaluate something, you have to come at it fresh, with no expectations and biases. You have to be both a typical reader or user and also willing to speak of any difficulties you find. Writers need editors and test readers to serve this role. Designers need the equivalent.
Footnote 1: Copyright © 1992 by Donald A. Norman. All rights reserved. Originally published by Addison-Wesley. Now out of print.