News & Perspectives

Life As We Don’t Know It

Perspective// Posted by: Christine Mason / 12 Apr 2016

“In 1925, it would have been pretty hard to explain to someone the danger of nuclear weapons, when they were just an idea. And that’s where we are with Artificial Intelligence… This might actually happen in our lifetime. So we should think about it now.”

MIT professor and cosmologist Max Tegmark is one of the founders of The Future of Life Institute, a research group working to understand and mitigate existential risks facing humanity. Their current focus? The development of human-level artificial intelligence, which board member Elon Musk says is the human race’s biggest existential threat.

While FLI says its mission involves “safeguarding life and developing optimistic visions of the future,” another board member, Stephen Hawking, posits: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

But the foundational existential question is whether or not humanity actually “exists” at all. According to Oxford philosopher Nick Bostrom, it is almost a mathematical certainty that we are an “artificial intelligence,” living in another sentient being’s computer simulation.

Bostrom’s theory, of course, need not have any impact on how we live and operate in the world. Ultimately, if we can touch something and experience it as real, whether it was created by computer or whittled from wood makes no subjective difference.

Economist Robin Hanson argues that one should try to be as interesting as possible, since simulation designers are more likely to keep interesting subjects around for the next simulation they create.

In building our own AI, engineer Nell Watson says it is our very humanity we should look to recreate: “The most important work of our lifetime is to ensure that machines are capable of understanding human values,” she states. “It is those values that will ensure machines don’t end up killing us out of kindness.”

So while Tegmark and his contemporaries continue to explore safeguards to ensure human survival, perhaps now is the time for us to apply the Golden Rule to our increasingly sentient creations—and hope they apply it to us in the future.