Elon Musk told the National Governors Association over the weekend that “AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.”
The man knows how to get attention. His words were carried within hours by outlets ranging from NPR to Architectural Digest. Many piled on to explain why he was wrong. Reason.com reviled him for suggesting regulation instead of allowing free markets to work their magic. And a cast of AI experts took him to task for alarmism that had no basis in their technical experience.
It’s worth examining this conflict. Some wonder about Musk’s motivation; others think he’s angling for a government grant for OpenAI, the company he backed to explore the ethical and safe development of AI. It’s a drum Musk has banged repeatedly, going back to his 2015 donation of $10 million to the Future of Life Institute, an amount an interviewer lauded as large and Musk explained was tiny.
I’ve heard the objections from the experts before. At the Canadian Artificial Intelligence Association’s 2016 conference, the reactions from people in the field were generally either dismissive or perplexed, though, I should add, in no way hostile. When you’ve written every line of code in an application, it’s easy to say you know there’s nowhere in it that’s going to go berserk and take over the world. “Musk may say this,” began a common response, “but he uses plenty of AI himself.”
There’s no question that the man whose companies’ products include autonomous drone ships, self-landing rockets, cars on the verge of level 4 autonomy, and a future neural lace interface between the human brain and computers is deep into artificial intelligence. So why is he trying to limit it?
A cynical evaluation would be that Musk wants to hobble the competition with regulation that he has figured out how to subvert. A more charitable interpretation is that the man with more knowledge of the state of the art of AI than anyone else has seen enough to be scared. This is the more plausible alternative: if your only goal is to become as wealthy as possible, picking the most far-out technological challenges of our time and electing to solve them many times faster than was previously believed possible would be a dumb strategy.
And Elon Musk is anything but dumb.
Over a long enough time frame, what Musk is warning about is clearly plausible; it’s just that we can figure it will take so many breakthroughs to get there that it’s a thousand years in the future, a distance at which anything and everything becomes possible. If we model the human brain from the atoms on up, then with enough computational horsepower and a suitable set of inputs we could train this cybernetic baby brain to attain toddlerhood.
We could argue that Musk, Bill Gates, and Stephen Hawking are smart enough to see further into the future than ordinary mortals, and are therefore exercised by something that’s hundreds of years away and not worth bothering about now. Why the rogue AI scenario could arrive far sooner than a thousand years from now is a defining question for our time. Hawking originally went on record as saying that anyone who thought they knew when conscious artificial intelligence would arrive didn’t know what they were talking about. More recently, he revised his prediction of the remaining lifespan of humanity down from 1,000 years to 100.
No one can chart a line from today to Skynet and show it crossing the axis in 32 years. I’m sorry if you were expecting some sophisticated trend analysis that would do that. Ray Kurzweil has tried, and his efforts are regularly pilloried. Equally, no one should think that it’s provably more than, say, twenty years away. No one who watched the 2004 DARPA Grand Challenge would have thought that self-driving cars would be plying the streets of Silicon Valley eight years later. In 2015, the expectation of when a computer would beat the world’s leading Go players was ten years hence, not one. So while we are certainly at least one major breakthrough away from conscious AI, that breakthrough may sneak up on us quickly.
Two recommendations. One: we should be able to make more informed predictions of the effects of technological advances, and therefore we should develop models that today’s AI can use to make them. Once, people’s notion of the source of weather was angry gods in the sky; now we have supercomputers executing humongous models of the biosphere. It’s time we constructed equally detailed models of global socioeconomics.
Two: because absence of proof is not proof of absence, we should not require those warning us of AI risks to prove their case. This is not quite the precautionary principle, because attempts to stop the development of conscious AI would be utterly futile. Rather, it is that we should act on the assumption that conscious AI will arrive within a relatively short time frame, and decide now how to ensure it will be safe.
Musk didn’t actually say that his doomsday scenario involved conscious AI, although referring to killer robots certainly suggests it. In the short term, merely the increasingly sophisticated application of artificial narrow intelligence will guarantee mass unemployment, which qualifies as civilization-rocking by any definition. See Martin Ford’s The Lights in the Tunnel for an analysis of the economic effects. In the longer term, as AI grows more powerful, even nonconscious AI could wreak havoc on the world through the paperclip-maximizer scenario, unintended emergent behavior, or malicious direction.
To quote Falstaff, perhaps the better part of valor is discretion.