

DBW: Chapter 35. The Future of Technology


Science, engineering, technology: those who work in these fields never stop creating, dreaming, inventing. I know because I am one of them. I am always thinking of new ideas and new ways of doing things. When I was an engineer, I built things. When I was a psychologist and cognitive scientist, I did experiments, trying to uncover new findings, new interpretations, and new ways of thinking about the issues. When I was an industry executive and adviser to startup companies and then a designer, I thought of new methods, new tools, and new techniques. I’m still doing all of these things, except now most of my time is spent advising, developing new curricula for designers across the world, and writing about what I have learned—this book being an example.

Science and technology never stop, which means that our technology will always be changing. So whatever we say about technology today will be out of date tomorrow. (Except for doors, water faucets, and light switches: I predict that in 20-plus years we will still have trouble with all three.)

What new advances in science will eventually lead to new technologies? It takes a long time for principles discovered by science to find some use in a product, and then it takes many years for engineers and technologists to be able to produce a reliable, affordable, and useful product. Even after a new product is introduced, it can easily take another decade before it is accepted and absorbed into everyday life. These long lead times make it seem quite easy to predict what might come to market in the next decade or two: simply look into the research labs and see what they are working on. How do you do that? Read the journals, go to conferences, and peruse the patent applications.

A word of caution: most of the ideas in the research laboratories never become products. And even among those that do, most do not succeed in the marketplace. New companies (start-ups) mostly fail. This doesn’t mean that their products were not good. Failure can occur for many reasons.

I can predict with great assurance that all of the technological breakthroughs of the future will have first appeared in the research-and-development laboratories 10 to 20 years earlier. I also predict that people who tell us just which ones will succeed will be wrong most of the time. “Predicting the future is easy,” Herb Simon once told me. “People do it all the time. The hard part is getting it right.”

I am a technologist and a designer, so even though I know the difficulties, I cannot stop thinking about the future, not by predicting but by forecasting. What is the difference? A forecaster explores possibilities, sometimes by presenting multiple reasonable scenarios, even though all are different, sometimes (as in the case of weather forecasts) discussing the likelihood of an event happening. The goal is not to be accurate (because that is not possible) but instead to be ready, to be prepared. Technological breakthroughs are infrequent and often unexpected. Moreover, the time between a breakthrough and its appearance in the world as something useful, reliable, and affordable is measured in decades. The activities in research laboratories around the world are important sources of forecasts. Many of these great breakthroughs will never become practical enough for use, but some will. By preparing for all of them, though, we will be ready no matter what happens.

Here are some general forecasts about the future of technology. The advances in micro- and nanoelectronics will make things smaller and smaller, requiring less and less power while increasing speed and accuracy. This decrease in size enables advanced processing by small portable devices, which in turn will make many other new technologies affordable and useful. New sensors will enable the measurement of more and more physical and biological variables, including deductions from observations of human and animal behavior. Communication technologies will become so small and ubiquitous that, when coupled with small, powerful computational devices and a wide variety of sensors, almost everything of importance in the world will be monitored and observed. Many of these new devices will be mobile, even airborne, flying by the use of propellers as in today’s drones or, for tiny objects, using a variety of biological mechanisms, especially flapping wings. The world of invisible and ubiquitous computing, discussed and predicted in the 1990s, will be with us even more than it is today.

Advances in batteries and motors will enable many things to be motorized: baby carriages, shopping carts, as well as walkers and rollators for those who have difficulty with gait or balance. Actually, some of these devices may become unnecessary as exoskeletons become smaller, more powerful, and intelligent. The modern automobile already has hundreds of motors, most of them electric. We already have motorized surfboards and skateboards, and if we can make these things, think of what we can do with almost anything that has moving parts.

Exoskeleton? What is that? It is an external structure—often called an external skeleton, hence “exoskeleton”—worn by a person to compensate for injury to the body to enable walking or the use of limbs, or as a device to allow people to carry heavy objects or exert far more strength than is possible by the unaided human. This is yet another technology that has been worked on for over a hundred years (a US patent was issued in 1890 for an “apparatus for facilitating walking”), and is the dream of many science fiction novels and movies. But if the problems of the excessive weight of the skeleton and of energy supplies can be overcome, exoskeletons will one day become common, everyday devices. The most common energy source is batteries, but the power demands are so high that the battery weight and restricted lifetime are severe impediments. Today, there are commercial units used in hospitals for rehabilitation and in some manufacturing facilities to allow people to manipulate heavy loads, and there are units in various testing stages for the military.

What about biological advances? They will be numerous: sensors for all sorts of ailments; genomic sequencing done inexpensively in a few hours; new biological markers and sensors; new smart systems for diagnosing, monitoring, relieving pain, and improving muscular control—all of which will not be cures but will nonetheless minimize the negative impacts of the underlying disease. Many of these devices will be used in the home or worn on or in the body without the necessity for supervision by medical assistants.

New understanding of biological and neurological processing will enable many advances. The use of biologically inspired products will increase. Advances in medicine will lead to better predictions and treatment of medical conditions, with home analysis and monitoring and individualized medicine appropriate to the needs of each patient.

There will be new sources of energy and increased efficiency in all energy-using devices. And there will be new ways of producing the technologies the world needs to continue to be sheltered and to have productive occupations, education, and lives—all without harm to the world’s ecosystems.

The concept of money may very well change. Consider cryptocurrency—a new and rather confusing idea. It is a digital form of money, but with a difference. Money itself is confusing, and very few people understand the concept. Before you insist that it is simple, try to explain why people assume that pieces of paper have any value. They have value mainly because of trust in the government, but what is that trust based on? So cryptocurrency compounds the confusion about the nature of money with an entirely new way of generating, transferring, and spending it.

Today, at least in the Global North, people have moved away from physical objects as money. I have traveled from the United States to Europe for week-long trips and never had to use paper or coin-based money. Much of the money in the world is digital, not physical, whether in the form of credit cards or wire transfers, and today it can be transferred directly and digitally from person to person or from person to store using a smart device such as a mobile phone rather than by taking it out of a wallet and handing it over: “mobile money,” it is often called. Numerous low-income countries still use cash for most purposes, but the use of mobile money is increasing rapidly. Because the cellphone revolution has covered the world with simple, inexpensive phones, even in the poorest of communities, more and more people use the phone for information such as agricultural prices (thereby avoiding the exorbitant fees charged by brokers) and for digital transfers of funds. The International Monetary Fund reports that mobile-money accounts are widespread in many low- and middle-income countries. The Brookings Institution reports that digitizing cash delivery can address the UN Sustainable Development Goal of ending poverty. The future of money, it is becoming increasingly clear, is digital.

Today, all that digital currency still flows through the banking system, which includes governmental agencies. Does cryptocurrency change this process in a satisfactory, trustworthy way? Some people believe that cryptocurrency is still in its infancy, so what you hear about it today is not what it has the potential to do. Money is simply paper or digital figures in a database. Money works because people have trust in the government that has issued it, which means that the money of different countries has different values of trust. Many people who are deeply involved in cryptocurrency believe that it represents a fundamentally new perspective and view of currency.

Many of the advances in technology, biotech, sensors, and motors will be driven by artificial intelligence. How threatening is AI? The main threat is unintelligent use of technology—any technology. It’s like a gorilla. Gorillas in the wild are peaceful. They are vegetarians, so they spend much of the day chewing on plants, leaves, seeds, and fruits. If you approach them quietly, intelligently, your interaction with them will be an enjoyable experience. AI need not be a threat, but it has to be approached, designed, and implemented intelligently, with full understanding of the people with whom it will interact and with the aim to enhance their activities, not to replace them.

Unfortunately, technologists who design and release AI—and most new technologies—are proud of their expert technical skills but overlook ethical considerations such as enabling equity, eradicating bias, and ensuring that people remain in control. They seldom think about how to design new technologies so that people are comfortable with the way those technologies work and perform. All too often, these societal and social issues lie outside the development team’s technical expertise. This limitation is understandable, for these technologies require certain kinds of specialization, and human- and humanity-centered design principles are, like ethics and equity, quite a different specialty. This is why every team of technologists should include social scientists: people who do understand these principles and who will assist the technologists in addressing them. Quite often, in the rush to deliver the new, amazing wonders of the world to the waiting public, technologists and designers push the social issues aside, giving AI the bad reputation it has today.

Automation will change how work is done, whether by AI, robots, or other technologies. This might be virtuous if it allows a change in the notion of work. As I discussed in chapter 33, what if we could move from the notion of work—an activity that many people consider a chore and a burden, necessary for the income provided but not something they look forward to every day—to the notion of an occupation? Let’s replace jobs with occupations and professions. Give some meaning to income-earning activities, which will allow workers to take pride in their accomplishments.

It is also important to pay attention to the letter A of AI. The A stands for “artificial.” Artificial intelligence and human intelligence are quite different, and that difference can either be threatening or, if used properly, a source of powerful enhancement of people’s abilities.

Some of you may remember when arithmetic and algebraic calculators became inexpensive and commonplace. The big fuss in schools was whether students would be allowed to use them. After all, the argument went, if students could always use a calculator, they would lose their arithmetic skills. Well, that fuss is over—mostly. Children still learn arithmetic, but when it comes to doing it, they (and most adults) turn to calculators. I use my computer’s calculator frequently. Sure, I know how to add and subtract, multiply and divide, but I make errors. So why not use the computer?

In solving algebraic equations, or even the integral and differential equations of calculus, why not use a calculator? When I was in college, we looked up the answers to integrals in handbooks that had been compiled by hand, so they contained errors, and even then they did not give all of the answers, because it is impossible for a handbook to include every conceivable differential or integral equation. It often took a lot of manipulation to get an equation into a form that matched one of the entries in the handbook. Today we just type the equation into one of the many computer programs that can solve it quickly.
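
For readers who have never seen it, here is a minimal sketch of what “typing the equation into a computer program” can look like, using the open-source SymPy library for Python. The particular integral and differential equation are my own illustrative choices, not examples from the chapter.

```python
# A minimal sketch of symbolic mathematics with SymPy (an open-source Python library).
# The specific equations below are illustrative examples, not taken from the chapter.
from sympy import symbols, Function, Eq, integrate, dsolve, sin

x = symbols('x')

# An integral that once had to be looked up in a printed handbook:
antiderivative = integrate(x * sin(x), x)
print(antiderivative)   # -x*cos(x) + sin(x)

# A simple ordinary differential equation, solved symbolically:
f = Function('f')
solution = dsolve(Eq(f(x).diff(x), f(x)), f(x))
print(solution)         # Eq(f(x), C1*exp(x))
```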

Worse, I had to learn logarithms and use big, clumsy tables of values to do complex operations. What a relief that today we no longer have to do things that way. I still have a collection of slide rules: mechanical devices used to find arithmetic and trigonometric solutions by sliding a wooden rod back and forth beneath a movable glass cursor, which provided answers because the scales on the slide rule were laid out logarithmically. Note that the precision was limited to three or four digits, with no decimal point: its position had to be worked out mentally. I once took a course at MIT on the design of rockets in which a homework problem might take a full week to complete, not because it was difficult to understand but because we had to do so many calculations. We invariably got the wrong answers, not because we didn’t understand what we were doing but because with so many calculations we were bound to make errors in some, and each error would then propagate through the problem. Today, the very same problem can be solved in 30 minutes using a calculator or, better yet, a computer program. Computer tools and AI now allow students to solve far more complex problems than my class could and, moreover, let the students concentrate on the science behind the issues, not just on the mechanics of grinding out numerical answers.
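
For readers who wonder how sliding two wooden scales can multiply numbers, the principle behind those logarithmic markings is the elementary identity that turns multiplication into addition (a standard fact, noted here only for clarity):

```latex
% Slide-rule multiplication: positions on the scales are proportional to logarithms,
% so laying the length for a end-to-end with the length for b lands on the mark for ab.
\log(ab) = \log a + \log b
```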

Is it bad to use a computer to solve our equations for us? No! This tool allows engineers to concentrate on the real problems, to try multiple alternatives, to rethink assumptions—to be engineers instead of wasting their time and energy on hours and days of dull, tedious, nonproductive mathematical equation crunching. Solving the equations? That involves mechanical operations best left to machines.

However, just because the computer can solve the problem doesn’t mean that engineers are out of work. On the contrary, engineers are necessary to develop the equations in the first place and then to evaluate the answers to determine whether they actually meet the need. If not, the engineers must go back to their equations, reexamine the assumptions they made in developing them, and try again. When we had to solve the equations manually, each iteration might take hours; today each can be done in minutes. This speed leads to improved solutions, including solutions to problems that couldn’t have been solved before. Removing the mechanical tedium of solving equations lets engineers focus on their real skills: formulating the equations and interpreting the results. The advanced tools changed the job, and for the better.

A person plus an intelligent machine can have enhanced abilities, just as the power of engineers is enhanced when they use computers and calculators. I see the same for many activities. John Markoff, the New York Times technology columnist, wrote a wonderful book on this topic, Machines of Loving Grace, reviewing the history of machines and technology and suggesting that the real benefits will come when instead of doing AI, we do IA—not artificial intelligence but intelligence amplification. Yes, augmenting and amplifying human capabilities, becoming collaborators and assistants. The National Academies of Sciences, Engineering, and Medicine did a study of the use of AI and concluded that “there will be an increased need for AI systems to function effectively as teammates with humans.” Moreover, they continued, “when considering an AI system as a part of a team, rather than simply a tool capable of limited actions, the need for a framework for improving the design of AI systems to enhance the overall success of human-AI teams becomes apparent.”

We live in an artificial world where we can no longer survive without the artificial artifacts that govern our lives. But today, technology often has priority, forcing people to behave according to its artificial and arbitrary requirements. It is time to change this perspective, time to put people first, to allow and encourage people to do the activities they want to do and to have technology take on the tasks they don’t. What tasks? They are known as the three D’s: dull, dangerous, or dirty tasks.


From:

Norman, Don. (2023). Design for a Better World: Meaningful, Sustainable, Humanity Centered. MIT Press.


Addendum: A Brief Video on AI as Collaborator

You might enjoy this short, 2½-minute video on how the new generative design tools from AI can be valuable collaborators in human activity. This video is an extract from a course for the Interaction Design Foundation (IxDF). I can’t embed the video here because it is on the IxDF site, but you can see it there:
https://www.interaction-design.org/literature/topics/ai