Category: Warfare

Artificial Intelligence, Technology, The Singularity, Warfare

Timeline For Artificial Intelligence Risks

The debate about existential risks from AI is clouded in uncertainty. We don’t know whether human-scale AIs will emerge in ten years or fifty. But there’s also an unfortunate tendency among scientific types to avoid any kind of guessing when they have insufficient information, because they’re trained to be precise. That can rob us of useful speculation. So let’s take some guesses at the rises and falls of various AI-driven threats.  The numbers on the axes may turn out to be wrong, but maybe the shapes and ordering will not.

[Chart: speculative timeline of AI-driven threats — humans affected (log scale) versus years from now]

The Y-axis is a logarithmic scale of the number of humans affected, ranging from a hundred (10²) to a billion (10⁹). So some of those curves impact roughly the entire population of the world. “Affected” does not always mean “exterminated.” The X-axis is time from now.
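The original chart image is not reproduced here, but as a rough visual aid, here is a minimal Python/matplotlib sketch of how a chart with those axes might be laid out. The curve names come from the discussion below; their positions, widths, and heights are illustrative placeholders, not data from the post.

```python
# Illustrative sketch only: speculative "rise and fall" curves of AI-driven
# threats, plotted on a log Y-axis (humans affected) against years from now.
# All peak times, widths, and magnitudes are placeholder guesses.
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(0, 50, 500)

def bump(t, peak, width, scale):
    """A rough bell-shaped rise and fall peaking at `peak` years from now."""
    return scale * np.exp(-((t - peak) / width) ** 2)

curves = {
    "Autonomous weapons":        bump(years, peak=8,  width=8,  scale=1e6),
    "Automation unemployment":   bump(years, peak=18, width=10, scale=3e8),
    "Control failures":          bump(years, peak=30, width=12, scale=1e9),
    "Conscious AI":              bump(years, peak=40, width=12, scale=1e9),
    "Self-replicating machines": bump(years, peak=48, width=12, scale=1e9),
}

fig, ax = plt.subplots()
for label, curve in curves.items():
    # Clip at 100 so the curves stay on the 10^2..10^9 log scale.
    ax.plot(years, np.clip(curve, 1e2, None), label=label)
ax.set_yscale("log")
ax.set_ylim(1e2, 2e9)
ax.set_xlabel("Years from now")
ax.set_ylabel("Humans affected (log scale)")
ax.legend()
plt.show()
```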

We start out with the impact of today’s autonomous weapons, which could become easily obtained and subverted weapons of mass assassination unless stringent controls are adopted. See this video by the Future of Life Institute and the Campaign Against Lethal Autonomous Weapons. It imagines a scenario where thousands of activist students are killed by killer drones (bearing a certain resemblance to the hunter-seekers from Dune). Cheap manufacturing with 3-D printers might stretch the impact of these devices towards a million, but I don’t see it becoming easy enough for average people to make the precision-shaped explosive charges needed to go past that.

At the same time, a rising tide of unemployment from automation is projected by two studies to affect half the workforce of North America, and by extension of the developed world, within ten to twenty years. An impact in the hundreds of millions would be a conservative estimate. So far we have not seen new jobs created beyond the field of AI research itself, a field few of those displaced will be able to move into.

Starting around 2030 we have the euphemistically labeled “Control Failures,” the result of bugs in the specifications, design, or implementation of AIs causing havoc on any number of scales. This could culminate in the paperclip maximizer scenario, which would certainly put a final end to further activity in the chart.

The paperclip maximizer does not require artificial consciousness – if anything, it operates better without it – so I put the risk of conscious AIs in a separate category starting around 20 years from now. That’s around the median time AI researchers predict for human-scale AI to be developed. Again, “lives affected” isn’t necessarily “lives lost” – we could be looking at the impact of humans integrating with a new species – but equally, it might mean an Armageddon scenario if a conscious AI decides that humanity is a problem best solved by its elimination.

If we make it through those perils, we still face the risk of self-replicating machines running amok. This is a hybrid risk combining the ultimate evolution of autonomous weapons and the control problem. A paperclip maximizer doesn’t have to end up creating self-replicating factories… but it certainly is more fun when it does.

Of course, this is a lot of rampant speculation – I said as much to begin with – but it gives us something to throw darts at.

Artificial Intelligence, Existential Risk, Technology, Warfare

Is Skynet Inevitable?

Australia’s leading AI expert is afraid the army is creating ‘Terminator’, and no, that’s not taking him out of context:

I mean Hollywood is actually pretty good at predicting the future. And if we don’t do anything, then in 50 to 100 years time it will look much like Terminator. It wouldn’t be that dissimilar to what Hollywood is painting… there will be a lot of risk well before then, in fact.

Like so many scenarios I discuss in Crisis of Control, the question is not if, but when. A timeframe of 50 to 100 years may lull us into a false sense of security, when the real question is: how far in advance do we need to act to prevent this scenario?

That we will one day create “Skynet” is all but inevitable once you accept that we will develop conscious artificial intelligence (CAI). I go into the reasons why that will happen relatively soon in Crisis. The military is certain to develop CAI for the tremendous tactical leverage it grants. However, the hard take-off in intelligence levels and random mutations (which are all but assured in anything possessing creativity) will mean that at some point a CAI will evolve a motivation to control the real-world machinery it is connected to… or can reach.

This is the point where many people argue that we can “just unplug it” and make sure it’s not connected to, say, weaponry. The “just unplug it” argument has been disposed of before (basically, try unplugging Siri and see how far you get). The odds of successfully air-gapping an AI from a network it wants to talk to are tiny; we had genetic algorithms that learned how to communicate across an air gap in 2004.

The answer that Crisis embraces is that we should develop ethical AI in the public domain first, so that it is in a position to dominate the market or defeat any unethical AI that may later arise. This is why we support the OpenAI initiative. It may sound naïve and reckless, but I believe it is the best chance we’ve got.