Less than a week after Elon Musk warned the National Governors Association about the risks of artificial intelligence, he got into a very public dust-up with Mark Zuckerberg, who thought Musk was being “pretty irresponsible.” Musk retorted that Zuckerberg’s understanding of the topic was “limited.”
This issue pops up with such regularity as to bring joy to the copyright holders of Terminator images. But neither of these men is a dummy, and they can’t both be right… right?
We need to unpack this a little carefully. There is a short term, and there is a long term. In the short term (the next 10-20 years), many jobs will be lost to automation, but tremendous benefits will be wrought by AI, specifically Artificial Narrow Intelligence, or ANI. That’s the kind of AI that’s ubiquitous now; each instance of it solves some specific problem very well, often better than humans, but that’s all it does. Of course, this has been true of computers ever since they were invented, or there would have been no point to them; from the beginning they were better at taking square roots than a person with pencil and paper.
But now those skills include tasks like facial recognition and driving a car, two abilities we can’t even adequately explain how we perform ourselves. Never mind; computers can be trained by showing them good and bad examples, and they just figure it out. They can now recognize faces better than humans, and the day when they are better drivers than humans is not far off.
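To make “showing them good and bad examples” concrete, here is a minimal sketch of the idea in Python: a toy perceptron that learns to separate positive from negative examples. The data points are invented for illustration, and real face recognizers and driving systems use deep neural networks trained on millions of examples, but the principle is the same: nobody writes down the rule; the machine infers it from labeled data.

# Toy perceptron: learns a decision rule from labeled examples
# instead of being programmed with one. Data invented for illustration.

# Each example is (features, label): +1 = "good", -1 = "bad".
examples = [
    ((2.0, 1.0), +1), ((1.5, 2.0), +1), ((3.0, 0.5), +1),
    ((-1.0, -0.5), -1), ((-2.0, 1.0), -1), ((-0.5, -2.0), -1),
]

w = [0.0, 0.0]  # weights -- learned, never hand-coded
b = 0.0         # bias

for _ in range(20):                       # a few passes over the data
    for (x1, x2), label in examples:
        prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
        if prediction != label:           # mistake: nudge the boundary
            w[0] += label * x1
            w[1] += label * x2
            b += label

print(w, b)  # the "rule" the machine figured out on its own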
In the short term, then, the effect is unemployment on an unprecedented scale: in the USA alone, 3.5 million people who drive vehicles for a living are expected to be laid off. The effects extend to financial analysts making upwards of $400k/year, whose jobs can now be largely automated. Two studies show that about 47% of work functions are expected to be automated in the short term. (That’s widely misreported as 47% of jobs being eliminated with the rest left unmolested; actually, most jobs would be affected to varying degrees, averaging out to 47%.) Mark Cuban agrees.
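The distinction is worth a back-of-the-envelope check. In this sketch every job title and automatable fraction is made up purely for illustration; the point is that the average automatable share of work can come to 47% even though not a single whole job disappears.

# Illustrative (invented) automatable share of each job's tasks.
automatable_share = {
    "truck driver": 0.90,
    "financial analyst": 0.60,
    "paralegal": 0.50,
    "nurse": 0.25,
    "kindergarten teacher": 0.10,
}

average = sum(automatable_share.values()) / len(automatable_share)
eliminated = sum(1 for s in automatable_share.values() if s >= 1.0)

print(f"average share of work automated: {average:.0%}")  # 47%
print(f"jobs eliminated outright: {eliminated}")          # 0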
But the ANIs that bring this about will bestow such a cornucopia upon us, say their proponents, that we should not impede their progress: cures for diseases, dirty and risky jobs handed off to machines, and wealth created in astronomical quantities, enough to take care of all those laid-off truckers.
That is true, but it requires that someone connect the wealth generated by the ANIs with the laid-off workers, and we’ve not been good at that historically. But let’s say we figure it out, the political climate swings towards Universal Basic Income, and in the short term, everything comes up roses. Zuckerberg: 1, Musk: 0, right?
Remember that the short term extends about 20 years. After that, we enter the era where AI will grow beyond ANI into AGI: Artificial General Intelligence. That means human-level problem-solving ability that can be applied to any problem. Except that anything that gets there will have done so by improving its own learning speed, and there is no reason for it to stop when it pulls even with humans. It will go on to exceed our abilities by orders of magnitude, and it will be connected to the world’s infrastructure in ways that make wreaking havoc trivially easy. It takes only a bug—not even consciousness, not even malevolence—for something that powerful to take us back to the Stone Age. Fortunately, history shows that Version 1.0 of all significant software systems is bug-free.
Oops.
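The claim that there is no reason to stop at human level is easier to see with a toy model. Everything below is hypothetical; the numbers are arbitrary and the real dynamics are unknown. But it shows the shape of the argument: once a system can improve its own rate of improvement, human parity is not a ceiling, just a milestone it passes on the way up.

# Toy model of recursive self-improvement. All numbers are invented;
# the point is the shape of the curve, not the timescale.

capability = 1.0       # start far below human level
human_level = 100.0
rate = 0.05            # capability gained per cycle, as a fraction
cycles = 0
crossed = False

while capability < 10 * human_level:
    capability *= 1 + rate    # the system gets smarter...
    rate *= 1.01              # ...and better at getting smarter
    cycles += 1
    if not crossed and capability >= human_level:
        crossed = True
        print(f"cycle {cycles}: human parity -- and still accelerating")

print(f"cycle {cycles}: {capability / human_level:.0f}x human level")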
Elon Musk and I don’t want that to be on the cover of the last issue of Time magazine ever published. Zuckerberg is more of a developer, and I have found that it is hard for developers to see the existential risks here, probably because they developed the code, they know every line of it, and they know that nowhere in it reside the lines
if ( threatened ) {
    wipe_out_civilization();
}
Of course, they understand emergent behavior; but when they’ve spent so much time so close to software they know intimately, it is easy to pooh-pooh as uninformed gullibility any assertion that it could rise up against us. Well, I’m not uninformed about software development either. And yet I believe that we could soon be developing systems that do display drastic emergent behavior, and that by then it will be too late to take appropriate action.
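As one small illustration of why “I know every line of the code” is weak reassurance, here is a sketch of emergent behavior: Rule 110, a one-dimensional cellular automaton whose entire update rule is an eight-entry lookup table, yet which is known to be Turing-complete. No line of the code below describes the intricate structures that appear in its output; they emerge from the interaction of trivially simple parts.

# Rule 110: the whole "program" is this eight-entry table mapping a
# cell's (left, self, right) neighborhood to its next state. Nothing
# here spells out the complex patterns that emerge.
RULE = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
        (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

WIDTH, STEPS = 64, 32
cells = [0] * WIDTH
cells[-1] = 1  # a single live cell to start

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [RULE[(cells[i - 1], cells[i], cells[(i + 1) % WIDTH])]
             for i in range(WIDTH)]

Run it, and structured triangles and gliders march across the output; nobody coded them, and nothing in the source would tip you off that they were coming.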
Whether this cascade of crises happens in 20 years, or 15, or 30, we should start preparing for it now, before we discover that we ought to have nudged this thing in another direction ten years earlier. And since it requires a vastly elevated understanding of human ethics, it may well take decades to learn what we need to give our AGIs not just superintelligence, but supercompassion.