In 1928, The New York Times ran an article titled “March of the Machine Makes Idle Hands,” which featured experts predicting that a new invention—factory machinery that ran on electricity—would soon make manual labor obsolete.
Marvin Minsky, the MIT researcher typically credited as the father of artificial intelligence, was reported to have said in 1970 that “in from three to eight years we will have a machine with the general intelligence of an average human being.”
The pandemic has shown us some of the benefits of automation more clearly than any Davos panel could have.
Machines, it turned out, could not offer an adequate substitute for human connection, or give us what we needed to get ahead. And maybe they never will.
My biggest problem with the mainstream AI debate, though, is that both sides tend to treat technological change as a disembodied natural force that simply happens to us, like gravity or thermodynamics.
The machine’s danger to society is not from the machine itself but from what man makes of it.
Technology has progressed nonstop for 250 years, and in the U.S. unemployment has stayed between 5 and 10 percent for almost all that time, even when radical new technologies like steam power and electricity came on the scene.
History suggests that while periods of technological change often improve conditions for elites and capital owners, workers don’t always experience the benefits right away.
After the onset of the Industrial Revolution in the 1760s, for example, Britain’s gross domestic product and corporate profits soared almost immediately, but it took more than fifty years, by some estimates, for the real wages of British workers to rise. (This gap, which was described by Friedrich Engels in “The Condition of the Working Class in England,” is known among economists as “Engels’s Pause.”)
...what the optimists miss is that we don’t live in the aggregate, or over the long term. We experience major economic shifts as individuals with finite careers and lifespans, and for many people, technological change hasn’t always resulted in better material conditions during their lives.
In study after study, researchers have found that, after reaching a certain performance threshold, AI tends to outperform not only humans, but human-AI teams.
In 1962, Yehoshua Bar-Hillel, an Israeli mathematician and language expert, dismissed the idea that computers could be taught to translate foreign languages, writing that “there is no prospect whatsoever that the employment of electronic digital computers in the field of translation will lead to any revolutionary changes.”
My favorite bad machine prediction came in 1984, when The New York Times ran a story about the introduction of automated ticket machines at airports. The article quoted experts who were very, very skeptical that computers would ever replace human travel agents. The owner of one travel agency was quoted as saying, “What happens if you just press the wrong button?”
A 2017 Gallup survey found that although 73 percent of U.S. adults believed that AI would “eliminate more jobs than it creates,” only 23 percent worried about losing their own jobs. All over the world, in every profession, smart people seem to have simultaneously convinced themselves that (a) AI is a massively powerful technology that will be capable of performing even complex jobs with superhuman efficiency, and (b) a machine will never, ever be able to do what they do.
When it comes to avoiding machine replacement, what we do is much less important than how we do it.
Some technologies come in disguise…they do not look like technologies, and because of that they do their work, for good or ill, without much criticism or even awareness.
This kind of one-for-one substitution still happens occasionally, like in 2019, when Walmart brought in a fleet of floor-cleaning robots and simultaneously laid off hundreds of human janitors.
“It’s not that people are getting fired,” he told me. “It’s that when they leave, there’s less and less urgency to replace them immediately. We just don’t need that many people anymore.”
This dynamic is part of what the technology writer Brian Merchant calls the “invisible automation” problem. Merchant writes that “automation does not appear to immediately and directly send workers packing en masse.” Instead, he says, its effects often appear gradually, in the form of pay cuts, unfilled openings, and higher turnover.
For example, one Ant Group affiliate, MYbank, is a lending app whose signature procedure is known as “3-1-0” because of what it requires: three minutes to apply for a loan, one second for an algorithm to approve it, and zero humans. The bank has lent out hundreds of billions of dollars this way, and thanks to the consumer data it collects from Alibaba and other partners, it has kept its default rate down around 1 percent, well below that of many traditional lenders.
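The “3-1-0” procedure can be pictured as a scoring function applied to platform data. The sketch below is purely illustrative (the applicant fields, weights, and threshold are hypothetical, not MYbank’s actual model); it only shows why the third number is zero: once the decision is a function of data, no human needs to sit in the loop.

```python
# Illustrative toy model of a "3-1-0" loan decision (hypothetical rules,
# not MYbank's actual algorithm). A real system would use a model trained
# on consumer transaction data; here a simple weighted score stands in.
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_revenue: float       # e.g., drawn from platform sales data
    on_time_payment_rate: float  # 0.0 to 1.0, from payment history
    account_age_months: int

def approve_loan(a: Applicant, amount: float) -> bool:
    """The 'one second' step: an instant, fully automated decision."""
    score = (
        0.5 * min(a.monthly_revenue / max(amount, 1.0), 1.0)
        + 0.4 * a.on_time_payment_rate
        + 0.1 * min(a.account_age_months / 24, 1.0)
    )
    return score >= 0.6  # threshold chosen for illustration only

# 'Zero humans': the result goes straight back to the borrower's app.
print(approve_loan(Applicant(8000.0, 0.97, 30), 5000.0))
```

The point of the sketch is structural, not statistical: the more behavioral data the scoring function can draw on, the less marginal value a human reviewer adds, which is how the default rate stays low with no one reading the applications.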
...we often realize that the technology that looked innocent and helpful when we first encountered it ended up having a more destructive effect.
We worry about Skynet, not spreadsheets. And when the change arrives, we’re often caught by surprise.
For now, as strange as it sounds, we may want to stop worrying about killer droids and kamikaze drones, and start worrying about the mundane, mediocre apps and services that allow companies to process payroll 20 percent more efficiently, or determine benefits eligibility with fewer human caseworkers. I believe, as experts like Eubanks and Le Clair do, that we underestimate boring bots at our peril.
Why is it every time I ask for a pair of hands, they come with a brain attached?
...if you could write a user’s manual for your job, give it to someone else, and that person could learn to do your job as well as you in a month or less, you’re probably going to be replaced by a machine.
How many of my beliefs and preferences were actually mine, I wondered, and how many had been put there by machines?
In his book The Power of Human, Adam Waytz, a psychologist and professor at Northwestern University’s Kellogg School of Management, runs down a laundry list of studies showing that people greatly prefer goods and experiences that have obvious human effort behind them, even when the goods and experiences are identical to those produced by machines.
In their book The Experience Economy, business school professors B. Joseph Pine II and James H. Gilmore write about the way certain enterprises move through the “progression of economic value.” They start out selling commodities, eventually begin selling goods, morph into providing services, and ultimately end up designing experiences.
I still think about Messina’s tweet all the time—in particular, the last sentence: “Humans quickly becoming expensive API endpoints.”
As the journalist Martin Ford writes in his book Rise of the Robots, “If you find yourself working with, or under the direction of, a smart software system, it’s probably a pretty good bet that—whether you’re aware of it or not—you are also training the software to ultimately replace you.”
Flawed AI often disproportionately impacts marginalized people, because the data used to train the algorithms is often drawn from historical sources that encode existing patterns of bias.
A 2019 report by the World Economic Forum estimated that of the workers who will be fully displaced by automation in the next decade, only one in four can be successfully reskilled by private-sector programs.
One of my favorite stories is that of the Rural Electrification Administration—the New Deal agency created by the Roosevelt administration in the 1930s to bring electric power to rural parts of the country—which held community-wide ceremonies each time it turned on the electricity for the first time in a new town. Getting electricity was a big, life-changing event for rural communities. It allowed farmers to dispense with heavy labor, extend the farming day by several hours, and produce bigger crop yields. According to the historian David E. Nye, these ceremonies often turned into lively community parties, complete with speeches from local politicians and mock “funerals” in which an oil lamp was buried underground, to symbolize the death of an old technology and the arrival of a new one.
Most of these systems, I believe, were not intentionally designed to create harm. Instead, I think their founders and engineers were idealists who thought that having good intentions mattered more than producing good outcomes.
We are all afraid—for our confidence, for the future, for the world. That is the nature of the human imagination. Yet every man, every civilization, has gone forward because of its engagement with what it has set itself to do.
I think we have a moral duty to fight for people, rather than simply fighting against machines.
History shows us that those who simply oppose technology, without offering a vision of how it could be made better and more equitable, generally lose.