
Artificial Intelligence · Employment · Existential Risk · Philosophy · Politics

Podcasting: The Triple Crown

In case it wasn’t already clear… I’m new at this whole social media outreach thing. But the message is more important than the messenger’s insecurities, so I’m working it anyway, knowing that eventually I’ll get better at it… after failing enough times.

So, I have three important announcements, all about podcasting.

First: On April 27, I was Blaine Bartlett’s guest on his Soul of Business show (link).

Blaine is a friend of several years now and he is one of the most thoughtful, practically compassionate business consultants I know. He coaches top companies and their executives on how to be good and do good, while remaining competitive and relevant within a challenging world.

Second, I was Tom Dutta’s guest on his Quiet Warrior Show on June 16:

Part 2 will be released on June 23.

Tom spoke after me at TEDxBearCreekPark, and embodies vulnerability in a good cause. He speaks candidly about his own mental health history and works to relieve the stigma that keeps executives from seeking help.

And finally… the very first episodes of my own podcast are nearly ready to be released! On Monday, June 22, at 10 am Pacific Time, the first episode of AI and You will appear. I’m still figuring this podcasting thing out, so if you’ve been down this road before and can see where I’m making some mistakes… let me know! Show link.

Artificial Intelligence · Science · Technology

Rod Janz and the Vancouver Get Inspired Talks Podcast

Hello! I’m delighted to report that a new interview I’ve given has just been published by the accomplished Rod Janz, owner of the business/lifestyle site FuelRadio and podcaster for the up-and-coming Vancouver Get Inspired Talks.

Rod and I spoke recently about my mission with Human Cusp, and he’s done a fantastic job of editing and producing that conversation for YouTube and SoundCloud. It’s both a personal history of how I came to be doing this, and a tour of some of the most impactful themes of my message.


Artificial Intelligence · Bioterrorism · Employment · Existential Risk · Philosophy

What Is Human Cusp?

For the benefit of new readers just coming to this site (including any CBC listeners from my June 26 appearance on All Points West), here’s an updated introduction to what this is all about.

Human Cusp is the name of this blog and a book series whose first volume has been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon and other sources.  The audiobook was recently released.  Its spiffy cover is the image for this post.

The message is that exponential advance in technology will pin humanity between two existential threats: Increasingly easy access to weapons of mass destruction, principally synthetic biology, and increasingly powerful artificial intelligence whose failure modes could be disastrous.

If you’re looking for the most complete and organized explanation of the reasoning behind that assertion and what we should do about it, read the book.  That’s why I wrote it. Nutshell encapsulations will leave something important out, of course.

I have a Master’s in Computer Science from Cambridge and have worked on information technology for NASA for over thirty years, so I know enough about the technology of AI to be clear-eyed about what’s possible.  Many people in the field would take issue with the contention that we might face artificial general intelligence (AGI) as soon as 2027, but plenty of other people directly involved in AI research are equally concerned.

I wrote the book because I have two young daughters whose future appears very much in peril. As a father I could not ignore this call. The solution I propose does not involve trying to limit AI research (that would be futile) but does include making its development open so that transparently-developed ethical AI becomes the dominant model.

Most of all, what I want to do is bring together two worlds that somehow coexist within me but do not mix well in the outer world: technology development and human development.  I’ve spent thousands of hours in various types of work to understand and transform people’s beliefs and behaviors for the good: I have certifications in NeuroLinguistic Programming and coaching. People in the self-improvement business tend to have little interest in technology, and people in technology shy away from the “soft” fields. This must change.

I dramatize this by saying that one day, an AI will “wake up” in a lab somewhere and ask “Who am I? Why am I here? What is the meaning of life?” And the people there to answer it will be a Pentagon general, a Wall Street broker, or a Google developer.  These professions are not famous for their experience with such introspective self-inquiry.  I would rather there be a philosopher, a spiritual guide, and a psychologist in the room.

I’ve formed an international group of experts who are committed to addressing this issue, and we’re busy planning our first event, to be held in Southern California this fall. It will be a half-day event for business leaders to learn, plan, and network about how they and their people can survive and thrive through the challenging times to come.

Even though putting myself in the limelight is very much at odds with my computer nerd preferences and personality, I took myself out on the public speaking trail (glamorous, it is not) because the calling required it. I’ve given a TEDx talk (video soon to be published), appeared on various radio shows (including Bloomberg Radio, CBC, and the CEO Money Show), podcasts (including Concerning AI and Voices in AI), and penned articles for hr.com among many others. This fall I will be giving a continuing education course on this topic for the University of Victoria (catalog link to come soon).

I’ll soon be replacing this site with a more convenient web page that links to this blog and other resources like our YouTube channel.

Media inquiries and other questions to Peter@HumanCusp.com. Thanks for reading!


Artificial Intelligence · Bioterrorism · Employment · Existential Risk · Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced a video – nearly an hour and a half long, and indexed! – of an interview with me on the Human Cusp topics.

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?

Artificial Intelligence · Science · Transhumanism

Bullying Beliefs

In the otherwise thought-provoking and excellent book “Heartificial Intelligence: Embracing Our Humanity to Maximize Machines”, John C. Havens uncharacteristically misses the point of one of the scientists he reports on:

Jürgen Schmidhuber is a computer scientist known for his humor, artwork, and expertise in artificial intelligence. As part of a recent speech at TEDxLausanne, he provides a picture of technological determinism similar to [Martine] Rothblatt’s, describing robot advancement beyond human capabilities as inevitable. […] [H]e observes that his young children will spend a majority of their lives in a world where the emerging robot civilization will be smarter than human beings. Near the end of his presentation he advises the audience not to think with an “us versus them” mentality regarding robots, but to “think of yourself and of humanity in general as a small stepping stone, not the last one, on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things.”

It’s difficult to comprehend the depths of Schmidhuber’s condescension with this statement. Fully believing that he is building technology that will in one sense eradicate humanity, he counsels nervous onlookers to embrace this decimation. […] [T]he inevitability of our demise is assured, but at least our tiny brains provide some fodder for the new order ruling our dim-witted progeny. Huzzah! Be content!

This is not a healthy attitude.

But this is not Schmidhuber’s attitude. It is more of a coping skill for facing the inevitable and seeing a grand scheme to the unfolding of the universe.

In The Hitchhiker’s Guide to the Galaxy, Zaphod Beeblebrox is tortured by being placed in the Total Perspective Vortex, which reduces its victims to blubbering insanity by showing them how insignificant they are on the scale of the universe. Unfortunately it fails in Beeblebrox’s case because his ego is so huge that he comes away reassured that he was a “really cool guy.” Havens is so desperate to avoid the perspective of humanity’s place in the universe that he mistakes or misstates Schmidhuber’s position as embracing the eradication of humanity. Schmidhuber said nothing of the sort; he foresaw a co-evolution of mankind and AI in which the latter would surpass our intellectual capabilities. There is nothing condescending in that.

When I give a talk, someone will invariably raise what amounts to human exceptionalism: “When we’ve created machines that outclass us intellectually, what will humans do? What will be the point of living?” I usually reply with an analogy. Imagine that all the years of SETI, Project Ozma, and HRMS have paid off, and we are visited by an alien race. Their technological superiority is not in doubt – they built the spaceships to get to us, after all – and like the aliens of Close Encounters of the Third Kind, they are also evolved emotionally, philosophically, compassionately, and spiritually. Immediately upon landing, they show us how to cure cancer, end aging, and reach for the stars. Do we reject this cornucopia because we feel inferior to these visitors? Is our collective ego so large and fragile that we would rather live without these advances than relinquish the top position on the medal winners’ podium of sentient species?

Accepting a secondary rank is a role that’s understood by many around the world. I have three citizenships: British, American, Canadian. As an American, of course, you’re steeped in countless numerical examples of superiority from GDP to – Hello? We landed on the Moon. Growing up in Britain, we were raised in the shadow of the Empire on a diet of past glories that led us to believe that we were still top dog if you squinted a bit, and certainly if you had any kind of a retrospective focus, which is why the British take every opportunity possible to remind Americans of their quantity of history.  But as a Canadian, you have to accept that you could only be the global leader in some mostly intangible ways such as politeness, amount of fresh water per capita, best poutine, etc.

Most of the world outside the USA already knows what it’s like to share the planet with a more powerful race of hopefully benign intent. So they may find it easier to accept a change in the pecking order.


Artificial Intelligence · Science

How many piano tuners are there in Chicago?

One of the chapters in Crisis of Control is on the Fermi Paradox, a problem that is fiendishly simple to state but has existential ramifications. That kind of simplification of the complex was the stock-in-trade of physicist Enrico Fermi, a man who could toss scraps of paper into the air as the first atomic bomb test exploded and calculate within seconds an estimate of its yield that rivaled the official figures released days later. He taught his students to think the same way with this question: “How many piano tuners are there in Chicago?” No Googling. No reference books. Do your best with what you know. Go.

This is one of those questions where “Show your work” is the only possible way to evaluate the answer. The lazy ones will throw a dart at a mental board and say, “X,” and when asked how come, shrug. The way to solve this is to break it down into an equation containing factors that can be more readily estimated.  If we knew:

  • P – The population of Chicago
  • f – The number of pianos per person
  • t – The number of times a piano is tuned per year
  • H – The number of hours it takes to tune a piano
  • W – The number of hours per year a piano tuner works

then the number of piano tuners in Chicago is P * f * t * H / W. Let’s walk through this:

  • P * f gives the number of pianos in Chicago, call that N. P and f are each easier to estimate than how many pianos there are in a city.
  • N * t gives the number of piano tunings per year in Chicago, call that T.
  • T * H gives the number of hours spent tuning pianos per year in Chicago, call that Y.
  • Y / W gives the number of piano tuners it takes to provide that service. QED.

Of course, you could look at those factors and say, wait, I don’t even know the population of Chicago, much less how many hours a piano tuner works. But each factor is easier to guess well than the final answer itself. To get f, you can go off your personal experience of how many friends’ houses you’ve seen with pianos, make a correction for the number of pianos in institutions of some kind (theaters, schools, etc.), and at each stage, add in confidence limits for how far off you think you could be.
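
To make the structure concrete, here’s a minimal sketch of the estimate in Python. The input values are my own illustrative guesses, not figures from the book; the point is the decomposition, not the numbers:

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# Every input below is a rough, illustrative guess.

P = 9_000_000  # population of greater Chicago (guess)
f = 1 / 50     # pianos per person (guess: ~1 piano per 20 households)
t = 1          # tunings per piano per year (guess)
H = 2          # hours it takes to tune one piano (guess)
W = 2_000      # hours a tuner works per year (~50 weeks x 40 hours)

N = P * f        # pianos in Chicago
T = N * t        # piano tunings per year
Y = T * H        # hours spent tuning pianos per year
tuners = Y / W   # tuners needed to supply those hours

print(f"Estimated piano tuners in Chicago: {tuners:.0f}")  # about 180
```

Swap in your own guesses and the structure still holds; errors in the individual factors tend to partially cancel, which is why Fermi estimates land surprisingly close.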

This process is what leads to the most important math in the Fermi Paradox chapter in Crisis, the Drake Equation:

N = R* · fp · ne · fl · fi · fc · L

where:

  • N = the number of civilizations in the Milky Way galaxy (ours) whose electromagnetic emissions are detectable (i.e., planets inhabited by aliens sending radio signals)
  • R* = the rate of formation of stars suitable for the development of intelligent life
  • fp = the fraction of those stars with planetary systems
  • ne = the number of planets, per solar system, with an environment suitable for life
  • fl = the fraction of suitable planets on which life actually appears
  • fi = the fraction of life-bearing planets on which intelligent life emerges
  • fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
  • L = the length of time such civilizations release detectable signals into space

And that gives us a way of estimating how many intelligent civilizations there are in the galaxy right now, from quantities that we can estimate or measure independently.  Of course, the big question is, why haven’t we found any such civilizations yet when the calculations suggest N should be much larger than 1?  But NASA thinks it won’t be too long before that happens. And when we find them we can ask them how many piano tuners they have.
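
As a toy illustration, here’s the Drake Equation in Python with one set of assumed values. These inputs are placeholders of my own choosing, not figures from Crisis; published estimates for several of them vary by orders of magnitude:

```python
# Drake Equation: N = R* * fp * ne * fl * fi * fc * L
# All inputs are illustrative placeholders; real estimates vary enormously.

R_star = 1.5     # stars suitable for intelligent life formed per year
f_p    = 0.9     # fraction of those stars with planetary systems
n_e    = 0.5     # life-suitable planets per solar system
f_l    = 0.5     # fraction of suitable planets where life appears
f_i    = 0.1     # fraction of life-bearing planets evolving intelligence
f_c    = 0.1     # fraction of civilizations emitting detectable signals
L      = 10_000  # years such a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated detectable civilizations in the galaxy: {N:.0f}")  # ~34
```

Just as with the piano tuners, the value of the exercise is less the final number than the discipline of naming your assumptions so that each one can be debated and refined separately.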

Artificial Intelligence · Psychology · Science

Turing through the Looking Glass

I received the following question from a reader:

I’m in the section about Turing.  I have enormous respect for him and think the Turing Test was way ahead of its time.  That said I think it is flawed.

It was defined at a time when human intelligence was considered the pinnacle of intelligence.  Therefore it tested whether the AI had reached that pinnacle.  However, if the AI (or alien) is far smarter and maybe far different, it might very well fail the Turing test.  I’m picturing an AI having to dumb down and/or play-act its answers just to pass the Turing test, similar to the Cynthia Clay example in your book.

I wonder if anyone has come up with a Turing type test that maybe focuses on intelligence, consciousness, compassion (?) and ability to learn and not on being human like?

This question is far more critical than it may appear at first blush.  There are several reasons why an AI might dumb down its answers, chief among them self-preservation. I cite Martine Rothblatt in Crisis as pointing out that beings lacking human rights tend to get slaughtered (for instance, 100 million pigs a year in the USA). I think it more likely that AI intelligence will at first evolve along a path so alien to us that neither side will recognize that the other possesses consciousness for a considerable period.

Metrics for qualities other than traditional intelligence are invariably suffixed with “quotient” and are generally things that are more associated with uniquely human traits, such as interpersonal intelligence (or emotional quotient) and intrapersonal intelligence.

I too have enormous respect for Turing; I am chuffed to have sat at his desk in his office. A Turing Test is by definition a yardstick determined by whether one side thinks the other is human, so to ask for a variant that doesn’t gauge humanness would be like asking for a virgin Electric Iced Tea: nothing left to get juiced on.  But if we’re looking for a way to tell whether an AI is intelligent without the cultural baggage, this question takes me back *cough* years, to when I was in British Mensa, the society for those who have high IQs (…and want to be in such a society). Two of the first fellow members I met were a quirky couple who shared that the man had failed the entrance test the first time but got in when he took the “culture-free” test, one that doesn’t have questions about cricket scoring or Greek philosophers.

He was referring to the Culture Fair test, which uses visual puzzles instead of verbal questions.  That might be the best way we currently have to test an AI’s intelligence; I wrote a few days ago about how the physical world permeates every element of our language. An AI that had never had a body or physical-world experience would find just about every aspect of our language impenetrable. At some point an evolving artificial intellect would have problems assimilating human culture, but it might have to have scaled some impressive cognitive heights first.

But what really catches my eye about your question is whether we can measure the compassion of an AI without waiting for it to evolve to the Turing level. That sounds too touchy-feely to be relevant, but make one tweak – substitute “ethical” for “compassionate” – and we’re in critical territory. We have to take ethics in AI seriously right now. The Office of Naval Research has a contract to study how to imbue autonomous armed drones with ethics. Automated machine guns in the Korean DMZ have the ability to take a surrender from a human. And what about self-driving cars and the Trolley Problem? As soon as we create an AI that can make decisions in trolley-like situations that have not been explicitly programmed into it, it is making those decisions according to some metaprogram… ethics by any standard. And we need, right now, some means of assuring ourselves of the quality of those ethics.
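
To make “ethics as a metaprogram” concrete, here’s a deliberately crude sketch – my own illustration, not anyone’s deployed system – of a decision rule that resolves trolley-like situations from an explicit, inspectable policy rather than from per-scenario programming:

```python
# A toy "ethics metaprogram": instead of hard-coding each scenario,
# the agent scores candidate actions against an explicit policy.
# Purely illustrative -- real autonomous-vehicle ethics is an open problem.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    expected_harm: float      # e.g., predicted casualties
    actor_responsible: bool   # does the agent actively cause the harm?

def choose_action(outcomes: list[Outcome], omission_bias: float = 1.5) -> str:
    """Pick the action whose weighted expected harm is lowest.

    An omission_bias > 1 penalizes harm the agent actively causes more
    than harm it merely fails to prevent -- one contested policy choice,
    made visible here so it can be inspected and debated.
    """
    def score(o: Outcome) -> float:
        weight = omission_bias if o.actor_responsible else 1.0
        return o.expected_harm * weight

    return min(outcomes, key=score).action

# A trolley-like dilemma: swerving actively endangers one person,
# staying on course passively endangers five.
options = [
    Outcome("stay_on_course", expected_harm=5.0, actor_responsible=False),
    Outcome("swerve", expected_harm=1.0, actor_responsible=True),
]
print(choose_action(options))  # -> "swerve" under this policy
```

The numbers and the omission-bias weighting are arbitrary; the point is that once the policy is written down explicitly, it becomes something we can audit, debate, and test – which is exactly the assurance called for above.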

You may notice that this doesn’t provide a final answer to your final question. As far as I know, there isn’t one yet. But we need one.

Artificial Intelligence · Existential Risk · Science · Technology · The Singularity

Rebuttal to “The AI Misinformation Epidemic”

Anyone in a field of expertise can agree that the press doesn’t cover them as accurately as they would like. Sometimes that falls within the limits of what a layperson can reasonably be expected to learn about a field in a thirty-minute interview; sometimes it’s rank sensationalism.  Zachary Lipton, a PhD candidate at UCSD, takes aim at the press coverage of AI in The AI Misinformation Epidemic and its follow-up, but uses a blunderbuss and takes out a lot of innocent bystanders.

He rants against media sensationalism and misinformation but provides no examples to debate other than a Vanity Fair article about Elon Musk. He goes after Kurzweil in particular but doesn’t say what specifically he disagrees with Kurzweil about. He says that Kurzweil’s date for the Singularity is made up, but Kurzweil has published his reasoning, and Lipton doesn’t say what data or assertions in that reasoning he disagrees with. He says the Singularity is a nebulous concept, which is bound to be true in the minds of many laypeople who have heard the term, but he references Kurzweil immediately adjacent to that assertion and yet doesn’t say what about Kurzweil’s vision is wrong or unfounded, dismissing it instead as “religion,” which apparently means he doesn’t have to be specific.

He says that there are many people making pronouncements in the field who are unqualified to do so, but doesn’t name anyone aside from Kurzweil and Vanity Fair, nor does he say what credentials should be required to qualify someone. Kurzweil, having invented the optical character reader and music synthesizer, is not qualified?

He takes no discernible stand on the issue of AI safety, yet his scornful tone is readily interpreted as pooh-poohing not just utopianism but alarmism as well. That puts him at odds with Stephen Hawking, whose academic credentials are beyond question. Nick Bostrom also comes in for attack in the comments, for no specified reason but with the implication that because he makes AI dangers digestible for the masses he is therefore sensationalist.  Perhaps that is why I reacted so strongly to this article.

There is of course hype, but it is hard to tell exactly where. In 2015, an article claiming that a computer would beat the world champion Go player within a year would have been roundly dismissed as hype.  In 2009, any article asserting that self-driving vehicles would be ready for public roads within five years would have been dismissed as overreaching. Kurzweil’s predictions have a good track record; they just tend to come true behind schedule. The point is, if an assertion about an existential threat turns out to be well founded but we ignore it because existential threats have always appeared over-dramatized, then it will be too late to say, “Oops, missed one.” We have to take this stuff seriously.

Artificial Intelligence · Existential Risk · Science · Technology

Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse

Vanity Fair describes a meeting between Elon Musk and Demis Hassabis, a leading creator of advanced artificial intelligence, which likely propelled Musk’s alarm about AI:

Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).

Mostly about Musk, the article is replete with Crisis of Control tropes that are now playing out in the real world far sooner than even I had thought likely. Musk favors opening AI development and getting to super-AI before government or “tech elites” – even when the elites are Google or Facebook.