

Being Analog 2 of 3

TREATING PEOPLE LIKE MACHINES

What an exciting time the turn of the century must have been! The period from the late 1800s through the early 1900s was one of rapid change, in many ways paralleling the changes that are taking place now. In a relatively short period of time, the entire world went through rapid, almost miraculous technological invention, forever changing the lives of its citizens, society, business, and government. In this period, the light bulb was developed and electric power plants sprang up across the nation. Electric motors were developed to power factories. The telegraph spanned the American continent and the world, followed by the telephone. Then came the phonograph, for the first time in history allowing voices, songs, and sounds to be preserved and replayed at will. At the same time, mechanical devices were increasing in power. The railroad was rapidly expanding its coverage. Steam-powered ocean-going ships were under development. The automobile was invented, first as expensive, hand-made machines, starting with Daimler and Benz in Europe. Henry Ford developed the first assembly line for the mass production of relatively inexpensive automobiles. The first airplane was flown and within a few decades would carry mail, passengers, and bombs. Photography was in its prime and motion pictures were on the way. And soon to come was radio, allowing signals, sounds, and images to be transmitted all across the world, without the need for wires. It was a remarkable period of change.

It is difficult today to imagine life prior to these times. At nighttime the only lighting was through flames: candles, fireplaces, oil and kerosene lamps, and in some places, gas. Letters were the primary means of communication, and although letter delivery within a large city was rapid and efficient, with delivery offered more than once each day, delivery across distances could take days or even weeks. Travel was difficult, and many people never ventured more than 30 miles from their homes during their entire lives. Everyday life was quite different from today. But in what to a historian is a relatively short period, the world changed dramatically in ways that affected everyone, not just the rich and upper class, but the everyday person as well.

Light, travel, entertainment: all changed through human inventions. Work did too, although not always in beneficial ways. The factory already existed, but the new technologies and processes brought forth new requirements, along with opportunities for exploitation. The electric motor allowed a more efficient means of running factories. But as usual, the largest change was social and organizational: the analysis of work into a series of small actions and the belief that if each action could be standardized, each organized into “the one best way,” then automated factories could reap even greater efficiencies and productivity. Hence the advent of time-and-motion studies, of “scientific management,” and of the assembly line. Hence too came the dehumanization of the worker, for now the worker was essentially just another machine in the factory, analyzed like one, treated like one, and asked not to think on the job, for thinking slowed down the action.

The era of mass production and the assembly line resulted in part from the efficiencies of the “disassembly line” developed by the meat-packing factories. The tools of scientific management took into account the mechanical properties of the body but not the mental and psychological ones. The result was to cram ever more motions into the working day, treating the factory worker as a cog in a machine, deliberately depriving work of all meaning, all in the name of efficiency. These beliefs have stuck with us, and although today we do not go to quite the extremes advocated by the early practitioners of scientific management, the die was cast for the mindset of ever-increasing efficiency, ever-increasing productivity from the workforce. The principle of improved efficiency is hard to disagree with. The question is, at what price?

Frederick Taylor thought there was “the one best way” of doing things. Taylor’s work, some people believe, has had a larger impact upon the lives of people in this century than that of anyone else. His book, The Principles of Scientific Management, published in 1911, guided factory development and workforce habits across the world, from the United States to Stalin’s attempt to devise an efficient communist workplace in the newly formed Soviet Union. You may never have heard of him, but he is primarily responsible for our notions of efficiency, for the work practices followed in industry across the world, and even for the sense of guilt we sometimes feel when we have been “goofing off,” spending time on some idle pursuit when we should be attending to business.

Taylor’s “scientific management” was a detailed, careful study of jobs, breaking down each task into its basic components. Once you knew the components, you could devise the most efficient way of doing things, devise procedures that enhanced performance and increased the efficiency of workers. If Taylor’s methods were followed properly, management could raise workers’ pay while at the same time increasing company profit. In fact, Taylor’s methods required management to raise the pay, for money was used as the powerful incentive to get the workers to follow the procedures and work more efficiently. According to Taylor, everybody would win: the workers would get more money, the management more production and more profit. Sounds wonderful, doesn’t it? The only problem was that workers hated it.

Taylor, you see, thought of people as simple, mechanical machines. Find the best way to do things, and have people do it, hour after hour, day after day. Efficiency required no deviation. Thought was eliminated. First of all, said Taylor, the sort of people who could shovel dirt, do simple cutting, lathing, and drilling, and in general do the lowest level of tasks, were not capable of thought. “Brute laborers” is how he regarded them. Second, if thought was needed, it meant that there was some lack of clarity in the procedures or the process, which signaled that the procedures were wrong. The problem with thinking, explained Taylor, was not only that most workers were incapable of it, but that thinking slowed the work down. That’s certainly true: why, if we never had to think, just imagine how much faster we could work. In order to eliminate the need for thought, Taylor stated that it was necessary to reduce all work to the routine; that is, all the work except for people like him, who didn’t have to keep fixed hours, who didn’t have to follow procedures, who were paid wages literally hundreds of times greater than the brutes’, and who were allowed, indeed encouraged, to think.

Taylor thought that the world itself was neat and tidy. If only everyone would do things according to procedure, everything would run smoothly, producing a clean, harmonious world. Taylor may have thought he understood machines, but he certainly didn’t understand people. In fact, he didn’t really understand the complexity of machines and the complexity of work. And he certainly didn’t understand the complexity of the world.

The World Is Not Neat and Tidy

The world is not neat and tidy. Things not only don’t always work as planned, but the notion of “plan” itself is suspect. Organizations spend a lot of time planning their future activities, but although the act of doing the planning is useful, the actual plans themselves are often obsolete even before their final printing.

There are lots of reasons for this. Those philosophically inclined can talk about the fundamental nature of quantum uncertainty, of the fundamental statistical nature of matter. Alternatively, one can talk of complexity theory and chaos theory, where tiny perturbations can have major, unexpected results at some future point. I prefer to think of the difficulties as consequences of the complex interactions that take place among the trillions of events and objects in the world, so many interactions that even if science were advanced enough to understand each individual one — which it isn’t — there are simply too many combinations and permutations possible to ever have worked out all possibilities. All of these different views are quite compatible with one another.

Consider these examples of things that can go wrong:

  • A repair crew disconnects a pump from service in a nuclear power plant, carefully placing tags on the controls so that the operators will know that this particular unit is temporarily out of service. Later a minor incident occurs, and as the operators attempt to deal with it, they initially diagnose it in a reasonable, but erroneous way. Eventually, the problem becomes so serious that the entire power-generating unit is destroyed: the worst accident in the history of American nuclear power. Among the factors hindering their correct recognition of the situation is that the tags so carefully placed to indicate the out-of-service unit hang over another set of indicators, blocking them from view. Could this have been predicted beforehand? Maybe. But it wasn’t.
  • A hospital x-ray technician enters a dosage for an x-ray machine, then realizes the machine is in the wrong mode and corrects the setting. However, the machine’s computer program isn’t designed to handle a rapidly made correction, so it does not properly register the new value. Instead, it delivers a massive overdose to the patient. Sometime later, the patient dies of the overdose. The accident goes undiagnosed, because as far as anyone can determine, the machine has done the correct thing. Moreover, the effect of the overdose doesn’t show up immediately, so when the symptoms are reported, they are not correlated with the incident, or for that matter, with the machine. When the machine’s performance first comes under suspicion, the company that manufactured it explains in detail why such an accident is impossible. The situation repeats itself in several different hospitals, killing a number of patients before a sufficient pattern emerges that the problem is recognized and the design of the machine is fixed. Could this have been predicted beforehand? Maybe. But it wasn’t.
  • The French air traffic controllers seem to be forever complaining, frequently calling strikes and protests. American air traffic controllers aren’t all that happy either. And guess what the most effective protest method is? Insisting on following procedures. On normal days, if the workers follow the procedures precisely, work slows down, and in the case of air traffic control, airline traffic across the entire world is disrupted. The procedures must be violated to allow the traffic to flow smoothly. Of course, if there is an accident and the workers are found not to have followed procedures, they are blamed and punished.
  • The United States Navy has a formal, rigid hierarchy of command and control, with two classes of workers — enlisted crew and officers — and a rigid layer of formal rank and assignment. There are extensive procedures for all tasks. Yet in their work habits, especially in critical operations, rank seems to be ignored and crew members frequently question the actions. Sometimes they even debate the appropriate action to be taken. The crew, moreover, is always changing. There are always new people who have not learned the ship’s procedures, and even the veterans often don’t have more than two or three years’ experience with the ship: the Navy has a policy of rotating assignments. Sounds horrible, doesn’t it? Isn’t the military supposed to be the model of order and structure? But wait. Look at the outcomes: the crew functions safely and expertly in dangerous, high-stress conditions. What is happening here?

These examples illustrate several points. The world is extremely complex, too complex to keep track of, let alone predict. In retrospect, looking back after an accident, it always seems obvious. There are always a few simple actions that, had they been taken, would have prevented the accident. There are always precursor events that, had they been perceived and interpreted properly, would have warned of the coming accident. Sure, but this is in retrospect, when we know how things turned out.

Remember, life is complex. Lots of stuff is always happening, most of which is irrelevant to the task at hand. We all know that it is important to ignore the irrelevant and attend to the relevant. But how does one know which is which? Ah.

We human beings are a complex mixture of motives and mechanisms. We are sense-making animals, always trying to understand and give explanations for the things we encounter. We are social animals, seeking company, working well in small groups. Sometimes this is for emotional support, sometimes for assistance, sometimes for selfish reasons, so we have someone to feel superior to, to show off to, to tell our problems to. We are narcissistic and hedonistic, but also altruistic. We are lots of things, sometimes competing, conflicting things. And we are also animals, with complex biological drives that strongly affect behavior: emotional drives, sexual drives, hunger drives. Strong fears, strong desires, strong phobias, and strong attractions.

Making Sense of the World

If an airplane crashes on the border between the United States and Canada, killing half the passengers, in which country should the survivors be buried?

We are social creatures, understanding creatures. We try to make sense of the world. We assume that information is sensible, and we do the best we can with what we receive. This is a virtue. It makes us successful communicators, efficient and robust in going about our daily activities. It also means we can readily be tricked. It wasn’t Moses who brought the animals aboard the ark, it was Noah. It isn’t the survivors who should be buried, it is the casualties.

It’s a good thing we are built this way: this compliance saves us whenever the world goes awry. By making sense of the environment, by making sense of the events we encounter, we know what to attend to and what to ignore. Human attention is the limiting factor, a well-known truism of psychology and one of critical importance today. Human sensory systems are bombarded with far more information than can be processed in depth: some selection has to be made. Just how this selection is done has been the target of prolonged investigation by numerous cognitive scientists who have studied people’s behavior when overloaded with information, by neuroscientists who have tried to follow the biological processing of sensory signals, and by a host of other investigators. I was one of them: I spent almost ten years of my research life studying the mechanisms of human attention.

One understanding of the cognitive process of attention comes from the notion of a “conceptual model,” a concept that will gain great importance in Chapter 8 when I discuss how to design technology that people can use. A conceptual model is, to put it most simply, a story that makes sense of the situation.

I sit at my desk with a large number of sounds impinging upon me. It is an easy matter to classify the sounds. What is all that noise outside? A family must be riding their bicycles, and the parents are yelling to their children. And the neighbor’s dogs are barking at them, which is why my dogs started barking. Do I really know this? No. I didn’t even bother to look out the window: my mind subconsciously, automatically created the story, a comprehensive explanation for the noises, even as I concentrated upon the computer screen.

How do I know what really happened? I don’t. I listened to the sounds and created an explanation, one that was logical, heavily dependent upon past experience with those sound patterns. It is very likely to be correct, but I don’t really know.

Note that the explanation also told me which sounds went together. I associated the barking dogs with the family of bicyclists. Maybe the dogs were barking at something else. Maybe. The point is not that I might be wrong, the point is that this is normal human behavior. Moreover, it is human behavior that stands us in good stead. I am quite confident that my original interpretations were correct, confident enough that I won’t bother to check. I could be wrong.

A good conceptual model of events allows us to classify them into those relevant and those not relevant, dramatically simplifying life: we attend to the relevant and only monitor the irrelevant. Mind you, this monitoring and classification is completely subconscious. The conscious mind is usually unaware of the process. Indeed, the whole point is to reserve the conscious mind for the critical events of the task being attended to and to suppress most of the other, non-relevant events from taking up the limited attentional resources of consciousness.

On the whole, human consciousness avoids paying attention to the routine. Conscious processing attends to the non-routine, to the discrepancies and novelties, to things that go wrong. As a result, we are sensitive to changes in the environment but remarkably insensitive to the commonplace, the routine.

Consider how conceptual models play a role in complex behavior, say the behavior of a nuclear power plant, with many systems interacting. A nuclear power plant is large and complex, so it is no surprise that things are always breaking. Minor things. I like to think of this as analogous to my home. In my house, things seem forever to be breaking, and my home is nowhere near as complex as a power station. Light bulbs are continually burning out, several door hinges and motors need oiling, the bathroom faucet leaks, and the fan for the furnace is making strange noises. Similar breakdowns happen in the nuclear power plant, and although there are repair crews constantly attending to them, the people in the control room have to decide which of the events are important and which are just everyday background noise with no particular significance.

Most of the time people do brilliantly. People are very good at predicting things before they happen. Experts are particularly good at this because of their rich prior experience. When a particular set of events occurs, they know exactly what will follow.

But what happens when the unexpected happens? Do we go blindly down the path of the most likely interpretation? Of course; in fact, this is the recommended strategy. Most of the time we behave not only correctly but cleverly, anticipating events before they happen. You seldom hear about those instances. We get the headlines when things go wrong, not when they go right.

Look back at the incidents I described earlier. The nuclear power incident is the famous Three Mile Island event that completely destroyed the power-generating unit and caused such a public loss of confidence in nuclear power that no American plant has been built since. The operators misdiagnosed the situation, leading to a major calamity. But the misdiagnosis was a perfectly reasonable one. As a result, they concentrated on items they thought relevant to their diagnosis and missed other cues, which they thought were just part of the normal background noise. The tags that blocked the view would not normally have been important.

In the hospital x-ray situation, the real error was in the design of the software system, but even here, the programmer erred in not thinking through all of the myriad possible sequences of operation, something not easy to do. There are better ways of developing software that would have made it more likely to have caught these problems before the system was released to hospitals, but even then, there are no guarantees. As for the hospital personnel who failed to understand the relationship, well, they too were doing the best they could to interpret the events and to get through their crowded, hectic days. They interpreted things according to normal events, which was wrong only because this one was very abnormal.

Do we punish people for failure to follow procedures? This is what Frederick Taylor would have recommended. After all, management determines the one best way to do things, writes a detailed procedure to be followed in every situation, and expects workers to follow it. That’s how we get maximum efficiency. But how is it possible to write a procedure for absolutely every possible situation, especially in a world filled with unexpected events? Answer: it’s not possible. This doesn’t stop people from trying. Procedures and rule books dominate industry. The rule books take up huge amounts of shelf space. In some industries, it is impossible for any individual to know all the rules. The situation is made even worse by national legislatures, which can’t resist adding new rules. Was there a major calamity? Pass a law prohibiting some behavior, or requiring some other behavior. Of course, the law strikes at the easiest source to blame, whereas the situation may have been so complex that no single factor was to blame. Nonetheless, the law sits there, further constraining sense and reasonableness in the conduct of business.

Do we need procedures? Of course. The best procedures will mandate outcomes, not methods. Methods change: it is the outcomes we care about. Procedures must be designed with care and attention to the social, human side of the operation. Otherwise we get the condition that exists in most industries: if the procedures are followed exactly, work slows to an unacceptable level, so in order to perform properly it is necessary to violate the procedures. Workers get fired for lack of efficiency, which means they are subtly, unofficially encouraged to violate the procedures. But if something does go wrong, they can be fired for failure to follow the procedures.

Now look at the Navy. The apparent chaos, indecision, and arguments are not what they seem to be. The apparent chaos is a carefully honed system, tested and evolved over generations, that maximizes safety and efficiency even in the face of numerous unknowns, novel circumstances, and a wide range of skills and knowledge among the crew. Having everyone participate and question the actions serves several roles simultaneously. The very ambiguity, the continual questioning and debate, keeps everyone in touch with the activity, thereby providing redundant checks on the actions. This adds to the safety, for now errors are likely to be detected before they have caused problems. The newer crew members are learning, and the public discussions among the other crew serve as valuable training exercises: training, mind you, not in some artificial, abstract fashion, but in real, relevant situations where it really matters. And by not punishing people when they speak out, question, or even bring the operations to a halt, the Navy encourages continual learning and performance enhancement. It makes for an effective, well-tuned team.

New crew members don’t have the experience of older ones. This means they are not efficient, don’t always know what to do, and perform slowly. They need a lot of guidance. The system automatically provides this constant supervision and coaching, allowing people to learn on the job. At the same time, because the minds of the new crew members are not yet locked into the routines, their questioning can sometimes reveal errors: they challenge the conventional mindset, asking whether the simple explanation of events is correct. This is the best way to avoid errors of misdiagnosis.

The continual challenge to authority goes against conventional wisdom and is certainly a violation of the traditional hierarchical management style. But it is so important to safety that the aviation industry now has special training in crew management, where the junior officers in the cockpit are encouraged to question the actions of the captain. In turn, the captain, who used to be thought of as the person in command, with full authority, never to be questioned, has had to learn to encourage crew members to question his or her every act. The end result may look less regular, but it is far safer.

The Navy’s way of working is the safest, most sensible procedure. Accidents are minimized. The system is extremely safe. Despite the fact that the Navy undertakes dangerous operations at a rushed pace and under high stress, there are remarkably few mishaps. If the Navy were to follow formal procedures and a strict hierarchy of rank, the result would very likely be an increase in the accident rate. Other industries would do well to copy this behavior. Fred Taylor would turn over in his grave. (But he would be efficient about it, without any wasted motion.)

Human Error

Machines, including computers, don’t err, in the sense that they are fully deterministic, always returning the same value for the same inputs and operations. Someday we may have stochastic and/or quantum computation, but even then, we will expect them to follow precise laws of operation. When computers do err, it is either because a part has failed or because of human error in the design specification, the programming, or the construction. People are not fully deterministic: ask a person to repeat an operation, and the repetition is subject to numerous variations.

People do err, but primarily because they are asked to perform unnatural acts: to do detailed arithmetic calculations, to remember details of some lengthy sequence or statement, or to perform precise repetitions of actions. In the natural world, no such acts would be required: all are a result of the artificial nature of manufactured and invented artifacts. Perhaps the best illustration of the arbitrary, inelegant fit between human cognition and artificial demands, as opposed to the natural fit with natural demands, is the contrast between people’s ability to communicate in programming languages and in human language.

Programming languages are difficult to learn, and a large proportion of the population is incapable of learning them. Moreover, even the most skilled programmers make errors, and error finding and correction occupy a significant amount of a programming team’s time and effort. And programming errors are serious. In the best circumstances, they lead to inoperable systems. In the worst, they lead to systems that appear to work but produce erroneous results.

A person’s first human language is so natural to learn that it is done without any formal instruction: people must suffer severe brain impairment to be incapable of learning language. Note that “natural” does not mean “easy”: it takes ten to fifteen years to master one’s native language. Second language learning can be excruciatingly difficult.

Natural language, unlike programming language, is flexible, ambiguous, and heavily dependent on shared understanding, a shared knowledge base, and shared cultural experiences. Errors in speech are seldom important: utterances can be interrupted, restarted, even contradicted, with little difficulty in understanding. These properties make natural language communication extremely robust.

Human error matters primarily because we followed a technology-centered approach in which it matters. A human-centered approach would make the technology robust, compliant, and flexible. The technology should conform to the people, not people to the technology.

Today, when faced with human error, the traditional response is to blame the human and institute a new training procedure: blame and train. But when the vast majority of industrial accidents are attributed to human error, it indicates that something is wrong with the system, not the people. Consider how we would approach a system failure due to a noisy environment: we wouldn’t blame the noise; we would instead design a system that was robust in the face of noise.

This is exactly the approach that should be taken in response to human error: redesign the system to fit the people who must use it. This means avoiding the incompatibilities between human and machine that generate error, making it possible to detect and correct errors rapidly, and making the system tolerant of error. To “blame and train” does not solve the problem.

Continued…