AI: An Altogether Different Animal

An interview with David Eagleman
Perspective | 26 Apr 2016
Artificial Intelligence, AI, Existential Risk, Machine Learning, Robotics

David Eagleman is one of those rare writers who’s as likable in person as he is in his books. His 20-year career as a neuroscientist has been unusual; my personal introduction to his work was his 2009 book Sum: Forty Tales from the Afterlives, which combined the bite-sized brilliance of Calvino’s Invisible Cities with the wry pathos of Borges. Eagleman was recently the writer and presenter of The Brain, a six-part PBS television series that beautifully illuminates “the most complex object we’ve discovered in the universe.” He holds joint appointments in the Departments of Neuroscience and Psychiatry at Baylor College of Medicine in Houston, Texas. His key research touches on time perception, vision, synesthesia, and the intersection of neuroscience with law. Along with Sum, his books include The Brain: The Story of You (2015) and Incognito: The Secret Lives of the Brain (2011).

ENTER: In 2012, you famously pointed out that after 50 years of throwing the smartest people on the planet at AI, “all we have to show for it is the Roomba vacuum cleaner.” It’s four years later. Do you want to amend that statement?

DAVID EAGLEMAN: There’s a lot of confusion about what people mean by AI. There are obviously many examples of artificial intelligence that are better than we are at certain tasks. Your basic computer can do floating-point arithmetic 33 billion times faster than we can—but that doesn’t mean it’s intelligent in the way a human is.

From what I’ve seen so far, we don’t yet have an artificial generalized intelligence. There’s no denying we have very smart AI—but only for very particular tasks. Google Maps is an incredible AI, but it’s not going to discuss Shakespeare with us, or tell us who it plans to vote for in the presidential election. It’s a bunch of code, and doesn’t have its own opinions about anything.

We’re still trying to unlock the secrets of the brain—the world is full of neuroscientists working on this around the clock. But if I had to make a prediction, I’d say we’re many years away from having an AI system that’s actually like a human. It’ll be way outside our lifetime—more like 500 years from now—before we have a Blade Runner “replicant” or a C-3PO.


ENTER: How is the field of neuroscience affecting our approach to artificial intelligence?

DE: It complexifies it. Every time we look at the brain, we realize that the secrets implemented by Mother Nature are way beyond what we’re up to. In my book Incognito, I talk about all the stuff our unconscious brain does that we’re not even aware of. And in the years since that book was published, not a day goes by when I don’t realize it’s even more complex than I had dreamed.

Here’s an example. One of the things I outlined in that book is the “team of rivals” hypothesis: you’ve got competing drives that are constantly trying to steer your behavior. A sort of neural “parliament” is at work under the hood—and at different times, different “political parties” win.

This is why humans can be so nuanced and conflicted. If I place a warm, fresh cookie in front of you, part of your brain wants it. Part of your brain says, “Don’t eat that, it’s going to make you fat!” Part of you makes a contract with yourself: “If I eat the cookie now, I’ll go to the gym later.” And so on. The question is, who’s talking to whom? It’s all you, but it’s different parts of you. Building a machine made of conflicting parts is a basic principle that AI researchers have not yet figured out how to implement. But it’s fundamental to how the human brain operates.
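
As a purely illustrative aside (not part of the interview, and not Eagleman’s model): one minimal way to sketch the “team of rivals” idea in code is to have several competing drives cast weighted votes over the same candidate actions, with whichever coalition is loudest at that moment steering behavior. All drive names, weights, and scores below are invented for illustration.

```python
# Toy sketch of a "neural parliament": competing drives cast weighted votes
# over the same candidate actions; the coalition with the highest tally wins.
# Every name, weight, and score here is made up purely for illustration.

from dataclasses import dataclass


@dataclass
class Drive:
    name: str
    weight: float      # how "loud" this drive is at the moment
    preferences: dict  # action -> how much this drive wants it (0..1)

    def vote(self, action: str) -> float:
        return self.weight * self.preferences.get(action, 0.0)


def parliament_decision(drives, actions):
    """Tally the weighted votes and return the winning action plus the full tally."""
    tally = {a: sum(d.vote(a) for d in drives) for a in actions}
    return max(tally, key=tally.get), tally


if __name__ == "__main__":
    actions = ["eat the cookie", "skip the cookie", "eat now, gym later"]
    drives = [
        Drive("appetite", 0.9, {"eat the cookie": 1.0, "eat now, gym later": 0.8}),
        Drive("long-term health", 0.6, {"skip the cookie": 1.0, "eat now, gym later": 0.4}),
        Drive("self-image", 0.4, {"skip the cookie": 0.7, "eat now, gym later": 0.9}),
    ]
    choice, tally = parliament_decision(drives, actions)
    print(choice)  # with these made-up weights, the compromise option carries the vote
    print(tally)
```

Changing the weights (say, making “appetite” quieter after a big meal) changes which “party” wins, which is the point of the metaphor: the same system produces different behavior depending on which internal coalition dominates at that moment.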

ENTER: You and I have grown up with the idea of the “singularity.” What does this mean to neuroscientists today?

DE: I don’t know if there’s a unified opinion among neuroscientists. My own opinion is that it’s going to be spread out; it’s not going to be a “moment.” We will make very specific AIs that are better than humans at particular tasks. And it’s not very far off—if it hasn’t happened already—that they will be able to start manipulating their own code to make themselves even better at their task.

Whether we’ll have a human-like C-3PO seems to me a distant prospect, for many reasons. One of those reasons is that a giant part of what our brain does is managing hunger, cold, heartbeat, and so on—things that AI systems don’t have to deal with. So the question arises: do you need all those biological responsibilities to have a human-like intelligence? It’s an open question. If you held a gun to my head, I’d guess that, yes: in order to really model what it is to be human, you need all those things.

ENTER: Much science fiction takes for granted that future AI will have egos, a sense of self. But I wonder if our sense of self is related to the DNA imperative to survive and replicate ourselves. Without DNA, AIs won’t have, or need, a sense of “I.”

DE: That’s true—but at least in the case of reproduction, it takes about 13 years for humans to get there. The sense of “I” is something that develops through time as we learn that our brain controls all our different limbs, gets feedback from our sensory organs, and gets to determine our next course of action.

So the “I,” or the “me,” is not necessarily something in our DNA—though DNA is requisite to get there. Instead, the sense of “I” develops from experience, as a sense of, “okay, I get to control this huge system made of trillions and trillions of cells.” Our conscious mind is sort of a mission control center, taking feedback from different parliaments. That’s where we get the sense of “I.”

And it could be that that’s the easiest part of creating an AI. The computer gets to control certain things, though the rest of the world is completely out of its control. It develops a sense of what it can control, and what it can’t control. And so it develops a sense of “I,” even though it’s made of trillions of transistors, not DNA.

ENTER: What’s your personal Turing test?

DE: I find computers so easy to fool! The most advanced AI systems are at the level of a one-year-old child or something. If I say something like, “Okay, name five animals that can run faster than you can ride a bike,” that’ll completely throw a computer off. Or, “When Barack Obama walks into a room, does his head come with him?” Those are questions a four-year-old can answer, but that are impossible for any AI system. So my Turing test would start with very simple questions like that.


ENTER: Do you believe the definition of personhood will ever extend to machines?

DE: I have no problem applying personhood to something made of transistors; but I don’t think that’s the path we’re on.

AI systems—as they are right now—are completely different from us. The fact that they’re so unbelievably good at math, or web searches, or movie recommendations, and that they are doing this in a superhuman way, means that they’re not human. And in many other ways, they’re subhuman. They can’t get a joke, or answer those questions I posed before.

In order to say, “Okay, this machine is a person,” you would basically need to replicate a person, with a person’s needs and concerns and foibles, in a machine. Which is not the goal of AI. I mean, it would be trillions of dollars, for what? Just so you could have Fred, now, as a machine?

ENTER: Talking about this with you, I think of that quote by Isaac Newton: We’re like kids playing with shells on the sea-shore, while the great ocean of truth lies undiscovered around us. We barely know where we’re going with AI.

DE: That’s right. We’ve already got many examples of AI that are extraordinary, but they're nothing like people—nor are they meant to be. The fact that I can search trillions of web pages, and the one I want comes up right away, is amazing! Artificial intelligence is going off in these various unexpected directions. Google is a great example of a specialized AI; it can perform a huge variety of tasks, from mapping to searching to translating. But I wouldn’t think for one second to call it a “person.”


ENTER: I wonder if what the future might hold is more distributed intelligence—like the Internet of Things. Our devices and the objects around us may embody an intelligence about what we need, but they won't be free-standing. There may not be a need for androids at all.

DE: Exactly. The idea of having an android robot, like a C-3PO or a Blade Runner replicant, comes from the pre-Internet era. Back then, it seemed like a really good idea to spend a trillion dollars to build something like a physical, human-like robot. That doesn’t look like such a great investment now. We can already strap on VR glasses and get into our screens. Actually creating a physical android, and having to attend to its upkeep—fixing its wires, or replacing burned-out transistors—who the heck wants that? That concept comes from an earlier time.