The debate about existential risks from AI is clouded in uncertainty. We don’t know whether human-scale AIs will emerge in ten years or fifty. But there’s also an unfortunate tendency among scientific types to avoid any kind of guessing when they have insufficient information, because they’re trained to be precise. That can rob us of useful speculation. So let’s take some guesses at the rises and falls of various AI-driven threats. The numbers on the axes may turn out to be wrong, but maybe the shapes and ordering will not.
The Y-axis is a logarithmic scale of the number of humans affected, ranging from a hundred (10²) to a billion (10⁹). So some of those curves impact roughly the entire population of the world. “Affected” does not always mean “exterminated.” The X-axis is time from now.
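If you want to play with the shapes yourself, here is a minimal sketch of how such a chart could be set up, assuming Python with matplotlib and numpy; the two curves are invented placeholders illustrating the kind of bell shapes discussed below, not estimates from the chart itself.

```python
# Minimal sketch of the chart's axes; the curves are invented
# placeholders, not real risk estimates.
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(0, 50, 500)  # X-axis: time from now, in years

def bell(peak_year, peak_magnitude, width):
    """A bump that rises from 10^2 to 10^peak_magnitude and falls back."""
    exponent = 2 + (peak_magnitude - 2) * np.exp(-((years - peak_year) ** 2) / width)
    return 10.0 ** exponent

fig, ax = plt.subplots()
ax.set_yscale("log")              # logarithmic scale of humans affected
ax.set_ylim(1e2, 1e9)             # from a hundred (10^2) to a billion (10^9)
ax.set_xlabel("Years from now")
ax.set_ylabel("Humans affected")

ax.plot(years, bell(10, 6, 60), label="Autonomous weapons (placeholder)")
ax.plot(years, bell(20, 8.5, 120), label="Unemployment (placeholder)")
ax.legend()
plt.show()
```

Swap in your own guesses for the peaks and widths; as noted above, the point is the shapes and ordering, not the numbers.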
We start out with the impact of today’s autonomous weapons, which could become easily obtained and subverted weapons of mass assassination unless stringent controls are adopted. See this video by the Future of Life Institute and the Campaign Against Lethal Autonomous Weapons. It imagines a scenario where thousands of activist students are assassinated by killer drones (bearing a certain resemblance to the hunter-seekers from Dune). Cheap manufacturing with 3-D printers might stretch the impact of these devices toward a million, but I don’t see it becoming easy enough for average people to make precision-shaped explosive charges for the toll to climb past that.
At the same time, a rising tide of unemployment from automation is projected by two studies to affect half the workforce of North America, and by extension the developed world, within ten to twenty years. An impact in the hundreds of millions would be a conservative estimate. So far we have not seen new jobs created beyond the field of AI research itself, a field that few of those displaced will be able to move into.
Starting around 2030 we have the euphemistically labeled “Control Failures”: the result of bugs in the specifications, design, or implementation of AIs causing havoc on any number of scales. This could culminate in the paperclip scenario, in which an AI pursuing a trivially harmless goal converts all available resources, ourselves included, into paperclips; that would certainly put a final end to further activity in the chart.
The paperclip maximizer does not require artificial consciousness – if anything, it operates better without it – so I put the risk of conscious AIs in a separate category starting around 20 years from now. That’s around the median time AI researchers predict for human-scale AI to be developed. Again, “lives impacted” isn’t necessarily “lives lost” – we could be looking at the impact of humans integrating with a new species – but equally, it might mean an Armageddon scenario if a conscious AI decides that humanity is a problem best solved by its elimination.
If we make it through those perils, we still face the risk of self-replicating machines running amok. This is a hybrid risk combining the ultimate evolution of autonomous weapons and the control problem. A paperclip maximizer doesn’t have to end up creating self-replicating factories… but it certainly is more fun when it does.
Of course, this is a lot of rampant speculation – I said as much to begin with – but it gives us something to throw darts at.