Category: futurism

Artificial Intelligence, futurism

The Control Problem, Re-re-re-visited

Like moths to a flame, we cannot resist the siren call of the Control Problem, the name AI philosophers give to the question of how, or whether, we will be able to control AI as it gets more and more intelligent. (Not for nothing did I name my book Crisis of Control.) A reporter contacted me to ask for comment on a newly published paper from the Max Planck Institute. The paper makes a mathematical case for the impossibility of controlling a superintelligence through an extension of the famous (in computer science, at least) Halting Problem. This is a proof given to first-year computer science students to blow their minds about how to think about programs as data, and it works by establishing a contradiction: Suppose a function exists that can tell whether a program (whose source code is passed as input to the function) is going to halt. Now create a program that calls that function with itself as input, and if the function returns true, loop forever, i.e., do not halt. This program halts if it doesn’t halt, and doesn’t halt if it does halt. Smoke comes out of the computer: Paradox Alert! Therefore, no such function can exist.
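
To make that argument concrete, here is a minimal sketch of the contradiction in Python. The halts function is the hypothetical oracle the proof assumes in order to refute it; it cannot actually be implemented.

```python
def halts(program_source: str) -> bool:
    """Hypothetical oracle: True if running the given program halts.
    Assumed to exist only for the sake of contradiction."""
    ...

def paradox(program_source: str) -> None:
    """The diagonal program built on top of the oracle."""
    if halts(program_source):
        while True:   # the oracle said "halts," so loop forever
            pass
    # the oracle said "loops forever," so halt immediately

# Feed paradox its own source code. If halts() returns True, paradox
# loops forever; if it returns False, paradox halts. Either way the
# oracle is wrong, so no such function can exist.
```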

You might intuit that a lot of hay can be made from that line of reasoning, and this is exactly the road that MPI went down. Their paper says, in effect, “Suppose we had a function that could tell whether a program (an AI) would harm humans. Now imagine a program that calls that function on itself and, if the result is false, causes harm to humans.” Boom: Paradox Alert. Therefore it is impossible in general to tell whether a given program will cause harm to humans.
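
The paper’s construction follows the same diagonal pattern; here is a hedged sketch of it, with harms_humans and cause_harm as purely illustrative placeholder names (the paper argues in terms of abstract Turing machines, not code).

```python
def harms_humans(program_source: str) -> bool:
    """Hypothetical oracle: True if running the given program would
    ever harm humans. Assumed to exist only for the sake of contradiction."""
    ...

def cause_harm() -> None:
    """Placeholder for whatever behavior the oracle is meant to rule out."""
    ...

def contrarian(program_source: str) -> None:
    if not harms_humans(program_source):
        cause_harm()   # the oracle predicted "harmless," so do the opposite
    # the oracle predicted "harmful," so do nothing at all

# Run contrarian on its own source: whichever verdict harms_humans()
# returns, the program does the opposite, so no general procedure can
# decide in advance whether an arbitrary program will harm humans.
```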

That’s a narrow conclusion to hang a large amount of philosophical weight on, but the paper’s authors don’t mind going there. They invoke the AI boxing problem and the story of the Monkey’s Paw to make sure we are aware of the consequences of not being able to guarantee control of AI. This commentary was picked up by the reporter who contacted me. You can see the resulting article in Lifewire.

I supplied more input than they had room to quote, of course. I was quoted fairly, and not taken out of context, but for you, here’s the full text of what I gave the reporter:

The paper you cite extends a venerable computer science proof to the theoretical conclusion that it is impossible to prove that a sufficiently advanced computer program couldn’t harm humanity. That doesn’t mean they’ve proved that advanced AIs will harm humanity! Just that we have no assurances that they won’t. Much as when voting for a candidate for President, we have no guarantee that he won’t foment an insurrection at the end of his term.

Controlling AI is currently a quality assurance problem: AI is a branch of computer science and its products are software; unpredictability in its behavior is what we call bugs. There is a long-established discipline for testing software to find them. As AI becomes increasingly complex, its operational modes become so varied as to defy comprehensive testing. Will a self-driving vehicle start learning about the psychology of children because it needs to predict whether they will jump in front of the car? Will we need to teach that car the physics of flammable liquids so it can decide whether it can convey its passengers to safety if it encounters an overturned tanker in the road? The range of possible behavior approaches infinity. We cannot expect to keep testing manageable by limiting the possible knowledge and behavior of an AI because it is precisely that unbounded knowledge that will allow it to do what we want.

We cannot, ultimately, ensure the controllability of AI any more than we can ensure that of our children. We raise them right and hope for the best; so far they have not destroyed the world. To raise them well we need a better understanding of ethics; if we can’t clean our own house, what code are we supposed to ask AI to follow?

The problems in controlling AI right now are those of managing any other complex software: if we have a bug, what will it do? Send grandma an electric bill for a million dollars? Or will image tagging software decide that African Americans are gorillas, as Google Photos did? These bugs are not the kind of uncontrollability that the paper’s authors or your readers are interested in. They want to know if and when AI will develop agency, or purposes that are clearly at odds with what its creators intended. That is known as the value alignment problem in artificial intelligence, and it does not require that the AI become self-aware to be a problem. Nick Bostrom’s hypothetical paper-clip maximizer AI does not have to be conscious to wipe out the human race. But it does require a level of sophistication we do not currently know how to create. Most experts think we are decades away from that ability; your readers need not panic. We do, however, think that it is worth addressing the issue now, because the solution may also take decades to prepare.

What do you think about whether we could or should control superintelligent AI? Comment on this thread or use our contact form and maybe I can answer your question on my podcast.

“Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do.”

— Alan Turing

Artificial Intelligence, futurism, Technology

024 – The Biggest Question About AGI

https://www.podbean.com/media/share/pb-ujcbv-f271cd

This and all episodes at: https://aiandyou.net/ .


We tackle the most important question about Artificial General Intelligence – When Will It Happen? Everyone really wants to know, but no one has a clue. Estimates range from 5 to 500 years. So why talk about it? I talk about how this question was raised in a presentation and what it means to me and all of us.

We might not be able to get a date, but we’ll explore why it’s such a hard question and see what useful questions we can get out of it.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

AGI


Artificial Intelligence, futurism, Technology

023 – Guest: Pamela McCorduck, AI Historian, part 2

https://www.podbean.com/media/share/pb-2umm5-f259e4

This and all episodes at: https://aiandyou.net/ .


Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, she writes books that romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.

In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow’s “Two Cultures”, and the interaction between AI and the humanities, along with more tales of its founding fathers.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

Pamela McCorduck


Artificial Intelligence, futurism, Technology

022 – Guest: Pamela McCorduck, AI Historian

https://www.podbean.com/media/share/pb-5a3ez-f1ed50

This and all episodes at: https://aiandyou.net/ .


Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, she writes books that romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.

In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and the changes in thinking about intelligence since the early days.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

Pamela McCorduck


Artificial Intelligence, futurism, Technology

021 – Guest: David Wood, Futurist, part 2

https://www.podbean.com/media/share/pb-y2bxm-f0cd40

This and all episodes at: https://aiandyou.net/ .


How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.

In the second half of our interview, we talk about OpenAI, economic fairness with the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future.

Transcript and URLs referenced at HumanCusp Blog.

David Wood


Artificial Intelligence, futurism, Technology

020 – Guest: David Wood, Futurist

https://www.podbean.com/media/share/pb-u7qnm-f03b1e

This and all episodes at: https://aiandyou.net/ .


How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.

In part 1 of our interview, we talk about David’s singularitarian philosophy, the evolution and impact of Deep Learning, and his SingularityNET infrastructure for AI interoperation.

Transcript and URLs referenced at HumanCusp Blog.

David Wood