Artificial Intelligence, Employment, Existential Risk, Philosophy, Politics

Podcasting: The Triple Crown

In case it wasn’t already clear… I’m new at this whole social media outreach thing. But the message is more important than the messenger’s insecurities, so I’m working it anyway, knowing that eventually I’ll get better at it… after failing enough times.

So, I have three important announcements, all about podcasting.

First: On April 27, I was Blaine Bartlett’s guest on his Soul of Business show (link).

Blaine is a friend of several years now and he is one of the most thoughtful, practically compassionate business consultants I know. He coaches top companies and their executives on how to be good and do good, while remaining competitive and relevant within a challenging world.

Second, I was Tom Dutta’s guest on his Quiet Warrior Show on June 16:

Part 2 will be released on June 23.

Tom spoke after me at TEDxBearCreekPark and embodies vulnerability in a good cause. He speaks candidly about his history with mental health and works to relieve the stigma that keeps executives from seeking help.

And finally… the very first episodes of my own podcast are nearly ready to be released! On Monday, June 22, at 10 am Pacific Time, the first episode of AI and You will appear. I’m still figuring this podcasting thing out, so if you’ve been down this road before and can see where I’m making some mistakes… let me know! Show link.

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

What Is Human Cusp?

For the benefit of new readers just coming to this site (including any CBC listeners from my June 26 appearance on All Points West), here’s an updated introduction to what this is all about.

Human Cusp is the name of this blog and a book series whose first volume has been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon and other sources.  The audiobook was recently released.  Its spiffy cover is the image for this post.

The message is that the exponential advance of technology will pin humanity between two existential threats: increasingly easy access to weapons of mass destruction, principally through synthetic biology, and increasingly powerful artificial intelligence whose failure modes could be disastrous.

If you’re looking for the most complete and organized explanation of the reasoning behind that assertion and what we should do about it, read the book.  That’s why I wrote it. Nutshell encapsulations will leave something important out, of course.

I have a Master’s in Computer Science from Cambridge and have worked on information technology for NASA for over thirty years, so I know enough about the technology of AI to be clear-eyed about what’s possible. Many people in the field would take issue with the contention that we might face artificial general intelligence (AGI) as soon as 2027, but plenty of other people directly involved in AI research are equally concerned.

I wrote the book because I have two young daughters whose future appears very much in peril. As a father, I could not ignore this call. The solution I propose does not involve trying to limit AI research (that would be futile) but does include making its development open, so that transparently developed ethical AI becomes the dominant model.

Most of all, what I want to do is bring together two worlds that somehow coexist within me but do not mix well in the outer world: technology development and human development. I’ve spent thousands of hours in various kinds of work to understand and transform people’s beliefs and behaviors for the good: I have certifications in Neuro-Linguistic Programming and coaching. People in the self-improvement business tend to have little interest in technology, and people in technology shy away from the “soft” fields. This must change.

I dramatize this by saying that one day, an AI will “wake up” in a lab somewhere and ask, “Who am I? Why am I here? What is the meaning of life?” And the people there to answer it will be a Pentagon general, a Wall Street broker, or a Google developer. These professions are not famous for their experience with such introspective self-inquiry. I would rather there be a philosopher, a spiritual guide, and a psychologist in the room.

I’ve formed an international group of experts who are committed to addressing this issue, and we’re busy planning our first event, to be held in Southern California this fall. It will be a half-day event for business leaders to learn, plan, and network about how they and their people can survive and thrive through the challenging times to come.

Even though putting myself in the limelight is very much at odds with my computer nerd preferences and personality, I took myself out on the public speaking trail (glamorous, it is not) because the calling required it. I’ve given a TEDx talk (video soon to be published), appeared on various radio shows (including Bloomberg Radio, CBC, and the CEO Money Show), podcasts (including Concerning AI and Voices in AI), and penned articles for hr.com among many others. This fall I will be giving a continuing education course on this topic for the University of Victoria (catalog link to come soon).

I’ll soon be replacing this site with a more convenient web page that links to this blog and other resources like our YouTube channel.

Media inquiries and other questions to Peter@HumanCusp.com. Thanks for reading!

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced a video of our interview on the Human Cusp topics, nearly an hour and a half long (there’s an index!).

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?

Artificial Intelligence, Psychology, Science

Turing through the Looking Glass

I received the following question from a reader:

I’m in the section about Turing. I have enormous respect for him and think the Turing Test was way ahead of its time. That said, I think it is flawed.

It was defined at a time when human intelligence was considered the pinnacle of intelligence. Therefore it tested whether the AI had reached that pinnacle. However, if the AI (or alien) is far smarter, and maybe far different, it might very well fail the Turing test. I’m picturing an AI having to dumb down and/or play-act its answers just to pass the Turing test, similar to the Cynthia Clay example in your book.

I wonder if anyone has come up with a Turing-type test that focuses on intelligence, consciousness, compassion (?) and ability to learn, and not on being human-like?

This question is far more critical than it may appear at first blush. There are several reasons why an AI might dumb down its answers, chief among them being self-preservation. I cite Martine Rothblatt in Crisis as pointing out that beings lacking human rights tend to get slaughtered (for instance, 100 million pigs a year in the USA). I think it more likely that at first, AI intelligence will evolve along a path so alien to us that neither side will recognize that the other possesses consciousness for a considerable period.

Metrics for qualities other than traditional intelligence are invariably suffixed with “quotient” and are generally things that are more associated with uniquely human traits, such as interpersonal intelligence (or emotional quotient) and intrapersonal intelligence.

I too have enormous respect for Turing; I am chuffed to have sat in his office at his desk. A Turing Test is by definition a yardstick determined by whether one side thinks the other is human, so to ask for a variant that doesn’t gauge humanness would be like asking for a virgin Electric Iced Tea; there’d be nothing left to get juiced on. But if we’re looking for a way to tell whether an AI is intelligent without the cultural baggage, this question takes me back *cough* years, to when I was in British Mensa, the society for those who have high IQs (…and want to be in such a society). Two of the first fellow members I met were a quirky couple who shared that the man had failed the entrance test the first time but got in when he took the “culture-free” test, one without questions about cricket scoring or Greek philosophers.

He was referring to the Culture Fair test, which uses visual puzzles instead of verbal questions. That might be the best way we currently have to test an AI’s intelligence; I wrote a few days ago about how the physical world permeates every element of our language. An AI that had never had a body or physical-world experience would find just about every aspect of our language impenetrable. At some point an evolving artificial intellect would have problems assimilating human culture, but it might have to have scaled some impressive cognitive heights first.

But what really catches my eye about your question is whether we can measure the compassion of an AI without waiting for it to evolve to the Turing level. That sounds too touchy-feely to be relevant, but make one tweak (substitute “ethical” for “compassionate”) and we’re in critical territory. We have to take ethics in AI seriously right now. The Office of Naval Research has a contract to study how to imbue autonomous armed drones with ethics. Automated machine guns in the Korean DMZ have the ability to take a surrender from a human. And what about self-driving cars and the Trolley Problem? As soon as we create an AI that can make decisions in trolley-like situations that have not been explicitly programmed into it, it is making those decisions according to some metaprogram… ethics by any standard. And we need, right now, some means of assuring ourselves of the quality of those ethics.
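
To make that concrete, here is a deliberately toy sketch of what such a metaprogram might look like. This is entirely my own illustration, not anything from a real vehicle or the ONR work; every class, name, and weight in it is hypothetical. The point is that once the explicit rules run out, some scoring function, however implicit, is doing the ethical work:

```python
# A toy "ethics metaprogram" (illustration only; no real autonomous
# vehicle works this way, and every name and number here is made up).
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    expected_harm: float   # estimated severity-weighted harm to people
    rule_violations: int   # explicit rules broken (cross median, etc.)

def ethical_choice(outcomes, harm_weight=10.0, rule_weight=1.0):
    """Pick the action with the lowest weighted cost.

    The weights are the ethics: shifting harm_weight against
    rule_weight changes what the machine decides in a dilemma
    nobody explicitly programmed.
    """
    return min(outcomes,
               key=lambda o: harm_weight * o.expected_harm
                             + rule_weight * o.rule_violations)

# A trolley-like situation the designers never enumerated:
choice = ethical_choice([
    Outcome("brake in lane",      expected_harm=0.9, rule_violations=0),
    Outcome("swerve to shoulder", expected_harm=0.2, rule_violations=1),
])
print(choice.action)  # -> swerve to shoulder
```

Nothing in the explicit rules mentions trolleys, yet the system still “decides”; the weights are its ethics, and those weights are exactly what we would want some way to audit.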

You may notice that this doesn’t provide a final answer to your final question. As far as I know, there isn’t one yet. But we need one.

Artificial Intelligence, Psychology

This is not a chair

Guy Claxton’s book Intelligence in the Flesh speaks to the role our bodies play in our cognition, and it’s rather more important than having trouble cogitating because your tummy’s growling:

…our brains are profoundly egocentric. They are not designed to see, hear, smell, taste and touch things as they are in themselves, but as they are in relation to our abilities and needs. What I’m really perceiving is not a chair, but a ‘for sitting’; not a ladder but a ‘for climbing’; not a greenhouse but a ‘for propagating’. Other animals, indeed other people, might extract from their worlds quite different sets of possibilities. To the cat, the chair is ‘for sleeping’, the ladder is ‘for claw sharpening’ and the greenhouse is ‘for mousing’.

This gets at a vast divide between humans and their AI successors. If you accept that an AI must, to function effectively in the human world, relate to the objects in that human world as humans do, then there is a whole universe of cognitive labels-without-language that humans employ to divide and categorize that world, learned throughout childhood as a result of exercising a body in contact with the world.

This assertion underpins some important philosophy. Roy Hornsby says that according to George Steiner, “Heidegger is saying that the notion of existential identity and that of world are completely wedded. To be at all is to be worldly. The everyday is the enveloping wholeness of being.” In other words, you can’t form an external perception of something you are immersed in. You likely do not notice it at all. We are immersed in the physical world of objects to which we ascribe identities. It is so obvious to us that it seems silly to point it out.

Of course that is a chair. But then why is it so hard for a computer to recognize all things chair-like? Because it’s stupid? No, because it’s never sat in one. Our subconscious chair-recognition algorithm includes the thought process, “What would happen if I tried to sit in that?” and that is what allows us to include all sorts of different shapes in the class of chair. This is why getting computers to recognize chairs is such a fundamentally hard problem.

We might hope that chair recognition could be achieved through enough training, which would somehow embed the knowledge of “can this be sat in?” without our having to code it. That has worked well enough for cats, although I would like to know whether such a system could classify a lion or a tiger as feline as readily as a human does. But that training seems doomed to be restricted to each class of images we give it, and might never make the cognitive leap, “Oh yes, I could sit on this table if I needed to.” The system would still lack the context of a physical body that needs to sit, and our world is filled with objects that we relate to in terms of how we can use them. If we were forced to see them as irreducibly complex shapes, we might be so overwhelmed by the immensity of a cheese sandwich that it would never occur to us to eat it. Yet this is the nature of the world that any new AI will be thrust into. Babies navigate this world by narrowing their focus to a few elements: follow milk scent, suck. As each object in the world is classified by form and function, their focus opens a little wider.
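
A minimal sketch of that idea, assuming a purely made-up affordance test (none of these names or thresholds come from any real system), would classify by simulated use rather than by shape:

```python
# Toy affordance test: recognize "chair" by simulating "what would
# happen if I tried to sit on that?" rather than by matching shapes.
# Every name and threshold is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Thing:
    name: str
    surface_height_cm: float  # height of its top surface
    supports_kg: float        # load it can bear without collapsing
    surface_is_flat: bool

def affords_sitting(thing, body_mass_kg=80.0,
                    knee_height_cm=45.0, tolerance_cm=35.0):
    """An embodied test: does this object afford sitting for THIS body?"""
    return (thing.surface_is_flat
            and thing.supports_kg >= body_mass_kg
            and abs(thing.surface_height_cm - knee_height_cm) <= tolerance_cm)

for t in [Thing("chair", 45, 150, True),
          Thing("table", 75, 200, True),   # not chair-shaped, but sittable
          Thing("floor lamp", 160, 5, False)]:
    print(t.name, affords_sitting(t))
# chair True, table True, floor lamp False
```

The table passes and the lamp fails, not because of what they look like but because of what a particular body could do with them, which is exactly the cognitive leap described above.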

None of this matters in an Artificial Narrow Intelligence world, of course, where AIs never have to be-in-the-world. But the grand pursuit of Artificial General Intelligence will have to acknowledge the relationship of the human body to the objects of the world. One day, a robot is going to get tired, and it’ll need to figure out where it can sit.

Employment, Existential Risk, Psychology, The Singularity

Existential risk and coaching: A Manifesto

My article in the November 2016 issue of Coaching World brought an email from Pierre Dussault, who has been writing about many of the same issues I covered in Crisis of Control. His thoughtful manifesto is a call for the International Coaching Federation to extend the reach and capabilities of the coaching profession so that the effect of coaching on individual consciousness can make a global impact. I would urge you to read it here.

Bioterrorism, Employment, Existential Risk, Politics, Psychology

Crisis of Control: The Book

The first book in the Human Cusp series has just been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race. The paperback will be available within two weeks.

Many thanks to my reviewers, my friends, and especially my publisher, Jim Gifford, who has made this book so beautiful. As a vehicle for delivering my message, I could not have asked for more.

Artificial Intelligence, Psychology

Kids Get It

Today I was giving a talk on space exploration to the eighth-grade class at my daughters’ school. Their theme for this period is ‘Identity,’ so we did some discovery questions about the identities of planets and stars. Then, because so much of space exploration is about looking for life, I asked them about the identity of life. We got it down to the usual answers, like eating and pooping and reproducing. Then I said, “I see no one suggested ‘intelligence.’ Can we have life without intelligence?” It was decided that we could.

Then I asked, “Can we have intelligence without life?” There was immediate agreement and vigorous nodding.  I did a double take, and one of them helpfully explained: “AI.”  I recovered and remarked that that was not an answer I would have gotten twenty years ago.

Tomorrow’s adults have a good idea what’s coming.