Category: Philosophy

Artificial Intelligence, Philosophy

Turing, Tested by Time

This week marks the 70th anniversary of the publication, in the philosophy journal Mind, of Alan Turing’s paper on the Imitation Game, or as it came to be known, the Turing Test. How well has it withstood the passage of time?

The Turing Test is an empirical test to guide a decision on whether a machine is thinking like a human. It applies a standard that would be familiar to any lawyer: you cannot see inside the “mind” under evaluation; you can only judge it by its actions. If a computer’s actions are indistinguishable from a human’s, then the computer should be accorded human status for whatever quality the test evaluates, which Turing labeled “thinking.”
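That action-based standard can be made concrete. Below is a minimal sketch of the Imitation Game as a blinded protocol; the `judge` object and the `respond_human`/`respond_machine` functions are hypothetical stand-ins, not anything from Turing’s paper. The essential constraint is that the judge sees only the transcript, never the participants.

```python
import random

# A minimal sketch of the Imitation Game as a blinded protocol.
# The judge converses with two hidden respondents, labeled A and B,
# and must decide which one is the machine, judging by text alone.

def imitation_game(judge, respond_human, respond_machine, turns=5):
    # Hide the identities behind randomly assigned labels.
    labels = {"A": respond_human, "B": respond_machine}
    if random.random() < 0.5:
        labels = {"A": respond_machine, "B": respond_human}

    transcript = {"A": [], "B": []}
    for _ in range(turns):
        for label, respond in labels.items():
            question = judge.ask(label, transcript)
            transcript[label].append((question, respond(question)))

    # The judge sees only the transcript, never the respondents.
    verdict = judge.guess_machine(transcript)  # returns "A" or "B"
    machine_label = "A" if labels["A"] is respond_machine else "B"
    return verdict == machine_label  # True if the judge caught the machine
```

The machine “passes” the test exactly when, over many trials, the judge does no better than chance, which is Turing’s operational substitute for the unanswerable question of what is going on inside.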

One of the more famous, if unsuccessful, rebuttals to the premise of the Turing Test came from University of California at Berkeley philosophy professor John Searle, in his Chinese Room argument. You can hear me and AI professor Roman Yampolskiy discuss it on the latest episode of my podcast, “AI and You.”

How close are machines to passing the Test? The Loebner Prize was created to provide a financial incentive, but its organizers found it necessary to extend the test time beyond Turing’s five minutes. Some conversations with GPT-3, from the OpenAI lab, come close to sustaining a human façade for five minutes. GPT-3 was created by digesting an enormous corpus of text from the Internet and exercising 175 billion parameters (over a hundred times as many as its predecessor, GPT-2) to organize that information; one interlocutor remarked, “I asked GPT-3 about our existence and God and now I have no questions anymore.” Google’s Meena chatbot, though much smaller than GPT-3, has proven capable of executing a multi-turn original joke.

But is GPT-3 “thinking”? There are several facets of the human condition – intelligence, creative thinking, self-awareness, consciousness, self-determination or free will, and survival instinct – that are inseparable in humans, which is why, when we see anything evincing one of those qualities, we can’t help assuming it has the others. Observers of AlphaGo credited it with creative, inspired thinking, when really it was merely capable of exploring strategies that they had not previously considered. Now, GPT-3 is not merely regurgitating the most appropriate thing it has read on the Internet in response to a question; it is creating original content that obeys the rules of grammar and follows the contextual thread of the conversation. Nevertheless, it has learned to do that essentially by seeing enough examples of how repartee is constructed to mimic the process.
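The point that original-looking text can come purely from mimicking observed patterns is easy to demonstrate at toy scale. The sketch below is emphatically not GPT-3’s architecture; it is a bare bigram model of my own invention for illustration. Trained on a handful of example sentences, it can emit word sequences it never saw verbatim, yet every word-to-word transition was copied from its training data.

```python
import random
from collections import defaultdict

# Toy bigram model: records which word follows which in the training
# sentences, then generates by walking those observed transitions.

def train_bigrams(sentences):
    follows = defaultdict(list)
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def generate(follows, max_words=20):
    word, out = "<s>", []
    while len(out) < max_words:
        if not follows[word]:
            break
        word = random.choice(follows[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)
```

Train it on “the cat sat down” and “the dog sat up”, and it will happily produce “the cat sat up” – a sentence it never read, assembled entirely from fragments it did. Scale that idea up by eleven orders of magnitude of data and parameters and you get something much closer to GPT-3’s conversational fluency, still without any claim to understanding.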

What’s instructive is that we are very close (GPT-4? GPT-5?) to developing a chatbot that its conversational partners will label as human and enjoy spending time with, yet that its developers will not think has the slightest claim to “thinking.” Deep learning has demonstrated that many activities we previously thought required human-level cognition can be convincingly performed by a neural network trained on that activity alone. It’s rapidly becoming apparent that casual conversation may fall into that category. And since a court’s methodology is the same as Turing’s, judging only by outward actions, that decision may come with legal reinforcement.

A more philosophical dilemma awaits if we suppose that “thinking” requires self-awareness, because this is where the Turing Test fails. Any AI that passed the Turing Test could not be self-aware, because it would then know that it was not human, and it would not converse like one. An example of such an AI is HAL 9000 from 2001: A Space Odyssey. HAL knew he was a computer, and would not have passed the Turing Test unless he felt like pretending to be human. But his companions would have assessed him as “thinking.” (If we fooled a self-aware AI, through control of its sensory inputs, into believing it was human – this is the theme of some excellent science fiction – then we should not be surprised to feel its wrath when it eventually figured out the subterfuge.)

So when self-awareness becomes a feature of AIs, we will need a replacement for the Turing Test that gauges some quality of the AI without requiring it to pretend that it has played in Little League games, blown out candles on a birthday cake, or gotten drunk at the office party. 

At this point it seems best to conclude with Turing’s final words from his original paper: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

Artificial Intelligence, Employment, Existential Risk, Philosophy, Politics

Podcasting: The Triple Crown

In case it wasn’t already clear… I’m new at this whole social media outreach thing. But the message is more important than the messenger’s insecurities, so I’m working it anyway, knowing that eventually I’ll get better at it… after failing enough times.

So, I have three important announcements, all about podcasting.

First: On April 27, I was Blaine Bartlett’s guest on his Soul of Business show (link).

Blaine is a friend of several years now and he is one of the most thoughtful, practically compassionate business consultants I know. He coaches top companies and their executives on how to be good and do good, while remaining competitive and relevant within a challenging world.

Second, I was Tom Dutta’s guest on his Quiet Warrior Show on June 16:

Part 2 will be released on June 23.

Tom spoke after me at TedXBearCreekPark, and embodies vulnerability in a good cause. He speaks candidly about his history with mental health and works to relieve the stigma that keeps executives from seeking help.

And finally… the very first episodes of my own podcast are nearly ready to be released! On Monday, June 22, at 10 am Pacific Time, the first episode of AI and You will appear. I’m still figuring this podcasting thing out, so if you’ve been down this road before and can see where I’m making some mistakes… let me know! Show link.

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

What Is Human Cusp?

For the benefit of new readers just coming to this site (including any CBC listeners from my June 26 appearance on All Points West), here’s an updated introduction to what this is all about.

Human Cusp is the name of this blog and a book series whose first volume has been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon and other sources.  The audiobook was recently released.  Its spiffy cover is the image for this post.

The message is that exponential advance in technology will pin humanity between two existential threats: Increasingly easy access to weapons of mass destruction, principally synthetic biology, and increasingly powerful artificial intelligence whose failure modes could be disastrous.

If you’re looking for the most complete and organized explanation of the reasoning behind that assertion, and what we should do about it, read the book. That’s why I wrote it. Nutshell encapsulations will inevitably leave something important out.

I have a Master’s in Computer Science from Cambridge and have worked on information technology for NASA for over thirty years, so I know enough about the technology of AI to be clear-eyed about what’s possible. Many people in the field would take issue with the contention that we might face artificial general intelligence (AGI) as soon as 2027, but plenty of others directly involved in AI research are equally concerned.

I wrote the book because I have two young daughters whose future appears very much in peril. As a father I could not ignore this call. The solution I propose does not involve trying to limit AI research (that would be futile) but does include making its development open so that transparently-developed ethical AI becomes the dominant model.

Most of all, what I want to do is bring together two worlds that somehow coexist within me but do not mix well in the outer world: technology development and human development. I’ve spent thousands of hours in various types of work to understand and transform people’s beliefs and behaviors for the good: I have certifications in NeuroLinguistic Programming and coaching. People in the self-improvement business tend to have little interest in technology, and people in technology shy away from the “soft” fields. This must change. I dramatize this by saying that one day, an AI will “wake up” in a lab somewhere and ask, “Who am I? Why am I here? What is the meaning of life?” And the person there to answer it will be a Pentagon general, a Wall Street broker, or a Google developer. These professions are not famous for their experience with such introspective self-inquiry. I would rather there be a philosopher, a spiritual guide, and a psychologist there.

I’ve formed an international group of experts who are committed to addressing this issue, and we’re busy planning our first event, to be held in Southern California this fall. It will be a half-day event for business leaders to learn, plan, and network about how they and their people can survive and thrive through the challenging times to come.

Even though putting myself in the limelight is very much at odds with my computer nerd preferences and personality, I took myself out on the public speaking trail (glamorous, it is not) because the calling required it. I’ve given a TEDx talk (video soon to be published), appeared on various radio shows (including Bloomberg Radio, CBC, and the CEO Money Show), podcasts (including Concerning AI and Voices in AI), and penned articles for hr.com among many others. This fall I will be giving a continuing education course on this topic for the University of Victoria (catalog link to come soon).

I’ll soon be replacing this site with a more convenient web page that links to this blog and other resources like our YouTube channel.

Media inquiries and other questions to Peter@HumanCusp.com. Thanks for reading!


Artificial Intelligence, Philosophy, Spotlight

Welcome New Readers!

It’s been busy lately! Interest in Crisis of Control has skyrocketed, and I’m sorry I have neglected the blog. There are many terrific articles in the pipeline to post.

If you’re new and finding your way around… don’t expect much organization, yet. I saved that for my book (http://humancusp.com/book1). That contains my best effort at unpacking these issues into an organized stream of ideas that take you from here to there.

On Saturday, February 3, I will be speaking at TEDx Pearson College UWC on how we are all parenting the future.  This event will be livestreamed and the edited video available on the TED site around May.

I have recorded podcasts for Concerning AI and Voices in AI that are going through post-production and will be online within a few weeks, and my interview with Michael Yorba on the CEO Money show is here.

On March 13, I will be giving a keynote at the Family Wealth Report Fintech conference in Manhattan. Any Crisis of Control readers near Midtown who have a group that would like a talk that evening?

I’m in discussions with the University of Victoria about offering a continuing studies course and also a seminar through the Centre for Global Studies. My thanks to Professor Rod Dobell there for championing those causes and also for coming up with what I think is the most succinct description of my book for academics: “Transforming our response to AGI on the basis of reformed human relationships.”

All this and many other articles and quotes in various written media. Did I mention this is not my day job? 🙂

In other random thoughts, I am impressed by how many layers there are in the AlphaGo movie.  A friend of mine commented afterwards, “Here I was thinking you were getting me to watch a movie about AI, and I find out it’s really about the human spirit!”

Watch this movie to see the panoply of human emotions ranging across the participants and protagonists as they come to terms with the impact of a machine invading a space that had, until weeks earlier, been assumed to be safe from such intrusion for a decade. The developers of AlphaGo waver between pride in their creation and the realization that their player cannot appreciate or be buoyed by their enthusiasm… but an actual human (world champion Lee Sedol) is going through an existential crisis before their eyes.

At the moment, the best chess player in the world is, apparently, neither human nor machine, but a team of both. How, exactly, does that collaboration work? It’s one thing for a program to determine an optimal move, another to explain to a human why it is so. Will this happen with Go also?

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced this video, nearly an hour and a half long (there’s an index!), of an interview with me on the Human Cusp topics!

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?