From May 30 to June 3 I was at the 29th Canadian Conference on Artificial Intelligence at the University of Victoria, British Columbia. This is the academic side of AI, where you’ll find lots of math, theory, and the bleeding edge of advances in the algorithms that drive the world’s artificial intelligences. I spent the week chatting with professors and videoing some excellent interviews that I will be editing for publication here later.
I learned that the first conference on Artificial Intelligence was held by the Canadian Artificial Intelligence Association in 1973. I also learned that the people in the field are aware of the alarm raised by Hawking, Gates, and Musk, but can’t see how to apply it to their work. As Michael Bowling told me, “What am I supposed to do with that? It’s not like I should put ‘#include <ethics.h>’ in my code.”
Professor Bowling created a perfect poker-playing bot; before you clamor to download it to your phone for that next trip to Vegas, be aware of a few caveats. It’s only provably perfect in the game of heads-up, limit Texas Hold ‘Em (although it also performs well against more players); he’s not releasing the code; and if you consult a smartphone while playing a casino game, you’ll be bounced out of there quicker than a deadbeat who just lost his last chip.
But the program does neatly illustrate the quandary of people in the field with respect to the well-known existential alarms. Their programs currently have narrow, specific applications. A poker-playing bot, as smart as it may seem to a human poker player, is not a threat to humanity. Its creators know every piece of code in it, and nowhere is there a line that is in danger of subjugating the human race. The same observation applies to AlphaGo, the program that beat the human Go champion ten years ahead of expectations. How are AI coders supposed to react to those pleas for circumspection?
That dilemma might appear more real to the developers of another program I saw at the conference, which assimilates the events in a naval battle and determines in real time how to react. I saw simulated video of automated weapons deployments to defend a battleship against an aerial attack. You can draw a line from that to SkyNet a lot more readily than with the poker bot, although it is no more conscious or unstable. It’s just in charge of something more serious than a pile of gaming chips. So, of course, is the software inside a power station, a fly-by-wire plane, and a pacemaker.
I’ll be unpacking the takeaways from the conference more in future blog entries. Stay tuned.