
Loving & Hating Our AIs

From Siri to Barbie
Perspective // Posted by: Jeremy Sherman / 12 Apr 2016

When asked if she’s a real person, iPhone’s Siri says, “Sorry, I’ve been advised not to discuss my existential status.” Don’t you hate when people won’t answer a simple question?

Siri is limited, but that may be a relief to people who fear that AI spells our doom, a surprising fear given that non-AI technology already poses that threat. Dumb nukes and combustion engines are enough to do us in.

Perhaps what we really fear is that ever-expanding AI technology will take over our unique human role—devastating our spirits, taking our jobs, stealing our data and generally encroaching on our special status. Religions have banked on our divine status as God’s unique wunderkind. It would be bad enough to discover that God is two-timing us with intelligent life elsewhere in the universe. But to find it in our shirt pockets—well, that’s too close to home.

Archimedes didn’t wonder whether his levers and screws were human. Descartes concluded that even animals were no different from clockwork automatons (humans remained a breed apart). Unlike animals, we think; therefore, we are. And humans think in words. But AI uses words, too.

We need not look skyward for intelligent life muscling in on our scene. A playroom will do. The new AI-endowed Barbie learns her enchanted owner’s name, and weaves it into conversation with bubbly enthusiasm. She takes a page from Dale Carnegie, who advised that—to win friends and influence people—we need only repeat their names a lot. The strategy even works in a toy, convincing us that we are engaging with a real person. (Recent reports indicate that, like the spy who loved me, sweet Barbie may be able to gather our private data while charming our children.)

AI wouldn’t be a threat if we didn’t love and embrace it. But we do: as an extension of all innovative automation from Archimedes onward, it streamlines our lives. The human mind is not a computer; it’s a computer programmer. We unconsciously automate familiar behavior patterns, which is why we can multitask while driving. Driverless cars simply externalize our efficiency.

Machines that converse give us the enchanting, disturbing impression of being human, but they aren’t. They’re still just elaborate collections of electronic levers, heaving zeros and ones around. Without the human minds that program them, they’re mere mechanisms.

Some believe that humans are mere mechanisms, too. Assemble enough processing power, and out pops a sentient being. But this is a dubious argument at best. Are computers getting more intelligent? We might just as well ask whether Pixar films are getting more intelligent. Without humans programming (and watching) them, they’re just a lot of digital switches.

Were Siri less evasive, she would sing of herself what that most human of humans, Walt Whitman, sang of himself: “I am large, I contain multitudes.” Siri, Barbie and all AI represent multitudes of human programmers, offloading tasks that real people have learned to delegate or automate.

AI anxiety finds its fullest flower in fear of the Singularity: the point at which our redoubling computer connections become sentient enough to overtake humans.

In my view, Artificial Unintelligence is a far more pressing concern. Loose nukes, automated stock market crashes, pollutant-spewing factories, the sea-borne trash gyre and faith-based terrorism are far more terrifying than truly intelligent machines.

It’s not AI we have to fear — it’s shortsighted humans who delegate our worst habits to obedient machines.

Jeremy Sherman
Jeremy Sherman, Ph.D., M.P.P., writes a popular blog column for Psychology Today called Ambigamy and works with UC Berkeley’s Terrence Deacon on the origins of life. He is currently completing a book titled Neither Ghost Nor Machine for Columbia University Press.