Category: Employment

Artificial Intelligence, Employment, Existential Risk, Philosophy, Politics

Podcasting: The Triple Crown

In case it wasn’t already clear… I’m new at this whole social media outreach thing. But the message is more important than the messenger’s insecurities, so I’m working it anyway, knowing that eventually I’ll get better at it… after failing enough times.

So, I have three important announcements, all about podcasting.

First: On April 27, I was Blaine Bartlett's guest on his Soul of Business show (link).

Blaine is a friend of several years now and he is one of the most thoughtful, practically compassionate business consultants I know. He coaches top companies and their executives on how to be good and do good, while remaining competitive and relevant within a challenging world.

Second, I was Tom Dutta’s guest on his Quiet Warrior Show on June 16:

Part 2 will be released on June 23.

Tom spoke after me at TEDxBearCreekPark, and embodies vulnerability in a good cause. He speaks candidly about his own mental health history and works to relieve the stigma that keeps executives from seeking help.

And finally… the very first episodes of my own podcast are nearly ready to be released! On Monday, June 22, at 10 am Pacific Time, the first episode of AI and You will appear. I’m still figuring this podcasting thing out, so if you’ve been down this road before and can see where I’m making some mistakes… let me know! Show link.

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

What Is Human Cusp?

For the benefit of new readers just coming to this site (including any CBC listeners from my June 26 appearance on All Points West), here’s an updated introduction to what this is all about.

Human Cusp is the name of this blog and a book series whose first volume has been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon and other sources.  The audiobook was recently released.  Its spiffy cover is the image for this post.

The message is that exponential advance in technology will pin humanity between two existential threats: Increasingly easy access to weapons of mass destruction, principally synthetic biology, and increasingly powerful artificial intelligence whose failure modes could be disastrous.

If you’re looking for the most complete and organized explanation of the reasoning behind that assertion and what we should do about it, read the book.  That’s why I wrote it. Nutshell encapsulations will leave something important out, of course.

I have a Master's in Computer Science from Cambridge and have worked in information technology for NASA for over thirty years, so I know enough about the technology of AI to be clear-eyed about what's possible. Many people in the field would take issue with the contention that we might face artificial general intelligence (AGI) as soon as 2027, but plenty of other people directly involved in AI research are equally concerned.

I wrote the book because I have two young daughters whose future appears very much in peril. As a father I could not ignore this call. The solution I propose does not involve trying to limit AI research (that would be futile) but does include making its development open so that transparently-developed ethical AI becomes the dominant model.

Most of all, what I want to do is bring together two worlds that somehow coexist within me but do not mix well in the outer world: technology development and human development. I've spent thousands of hours in various types of work to understand and transform people's beliefs and behaviors for the good: I have certifications in NeuroLinguistic Programming and coaching. People in the self-improvement business tend to have little interest in technology, and people in technology shy away from the “soft” fields. This must change. I dramatize this by saying that one day, an AI will “wake up” in a lab somewhere and ask “Who am I? Why am I here? What is the meaning of life?” And the people there to answer it will be a Pentagon general, a Wall Street broker, or a Google developer. These professions are not famous for their experience with such introspective self-inquiry. I would rather there be a philosopher, a spiritual guide, and a psychologist in the room.

I’ve formed an international group of experts who are committed to addressing this issue, and we’re busy planning our first event, to be held in Southern California this fall. It will be a half-day event for business leaders to learn, plan, and network about how they and their people can survive and thrive through the challenging times to come.

Even though putting myself in the limelight is very much at odds with my computer nerd preferences and personality, I took myself out on the public speaking trail (glamorous, it is not) because the calling required it. I’ve given a TEDx talk (video soon to be published), appeared on various radio shows (including Bloomberg Radio, CBC, and the CEO Money Show), podcasts (including Concerning AI and Voices in AI), and penned articles for hr.com among many others. This fall I will be giving a continuing education course on this topic for the University of Victoria (catalog link to come soon).

I’ll soon be replacing this site with a more convenient web page that links to this blog and other resources like our YouTube channel.

Media inquiries and other questions to Peter@HumanCusp.com. Thanks for reading!


Artificial Intelligence, Employment, Politics, Spotlight, Technology

Human Cusp on the Small Business Advocate

Hello! You can listen to my November 28 interview with Jim Blasingame on his Small Business Advocate radio show in these segments:

Part 1:

Part 2:

Part 3:


Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced a video interview with me, nearly an hour and a half long (there's an index!), covering the Human Cusp topics!

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?

Artificial Intelligence, Employment, Existential Risk

Why Elon Musk Is Right … Again

Less than a week after Elon Musk warned the National Association of Governors about the risks of artificial intelligence, he got in a very public dust-up with Mark Zuckerberg, who thought Musk was being “pretty irresponsible.” Musk retorted that Zuckerberg’s understanding of the topic was “limited.”

This issue pops up with such regularity as to bring joy to the copyright holders of Terminator images. But neither of these men is a dummy, and they can’t both be right… right?

We need to unpack this a little carefully. There is a short term and a long term. In the short term (the next 10-20 years), while many jobs will be lost to automation, there will be tremendous benefits wrought by AI, specifically Artificial Narrow Intelligence, or ANI. That's the kind of AI that's ubiquitous now; each instance of it solves some specific problem very well, often better than humans, but that's all it does. This has of course been true of computers ever since they were invented, or there would have been no point; from the beginning they were better at taking square roots than a person with pencil and paper.

But now those skills include tasks like facial recognition and driving a car, two abilities we cannot adequately explain even in ourselves. Never mind; computers can be trained by showing them good and bad examples, and they just figure it out. They already recognize faces better than humans do, and the day when they are better drivers is not far off.
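For a feel of what “showing good and bad examples” means in practice, here is a minimal sketch in Python (using scikit-learn; the synthetic data stands in for real faces or road scenes):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for feature vectors; a real system would use
# pixels, audio samples, or sensor readings instead.
good = rng.normal(loc=+1.0, scale=1.0, size=(200, 5))   # "good" examples, label 1
bad = rng.normal(loc=-1.0, scale=1.0, size=(200, 5))    # "bad" examples, label 0

X = np.vstack([good, bad])
y = np.array([1] * 200 + [0] * 200)

# Nobody writes rules for what makes an example good; the model
# infers the boundary from the labeled examples alone.
model = LogisticRegression().fit(X, y)

# It now classifies examples it has never seen.
print(model.predict(rng.normal(size=(3, 5))))

No one writes down the rule; the model extracts it from the labels.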

In the short term, then, the effect is unemployment on an unprecedented scale: the 3.5 million people who drive vehicles for a living in the USA alone are expected to be laid off. The effects extend to financial analysts making upwards of $400k/year, whose jobs can now be largely automated. Two studies show that about 47% of work functions are expected to be automated in the short term. (That's widely misreported as 47% of jobs being eliminated with the rest left untouched; actually, most jobs would be affected to varying degrees, averaging out to 47%.) Mark Cuban agrees.
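To see why those two readings differ, consider a toy labor market; every number below is invented purely for illustration:

# Hypothetical per-job shares of automatable work functions. The point:
# an average of 47% across jobs is a different claim from "47% of jobs
# vanish while the rest are untouched."
jobs = {
    "truck driver": 0.90,
    "financial analyst": 0.70,
    "nurse": 0.30,
    "teacher": 0.25,
    "plumber": 0.20,
}

average_share = sum(jobs.values()) / len(jobs)
fully_eliminated = sum(1 for s in jobs.values() if s >= 0.99) / len(jobs)

print(f"average share of work functions automated: {average_share:.0%}")  # 47%
print(f"share of jobs eliminated outright: {fully_eliminated:.0%}")       # 0%

The average across jobs lands at 47%, yet not a single job in this toy example disappears outright.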

But, there will be such a cornucopia bestowed upon us by the ANIs that make this happen that we should not impede this progress, say their proponents.  Cures for diseases, dirty risky jobs given to machines, and wealth created in astronomical quantities, sufficient to take care of all those laid-off truckers.

That is true, but it requires that someone connect the wealth generated by the ANIs with the laid-off workers, and we’ve not been good at that historically. But let’s say we figure it out, the political climate swings towards Universal Basic Income, and in the short term, everything comes up roses. Zuckerberg: 1, Musk: 0, right?

Remember that the short term extends about 20 years. After that, we enter the era where AI grows beyond ANI into AGI: Artificial General Intelligence. That means human-level problem-solving ability that can be applied to any problem. Except that anything that gets there will have done so by being able to improve its own learning speed, and there is no reason for it to stop when it reaches parity with humans. It will go on to exceed our abilities by orders of magnitude, and it will be connected to the world's infrastructure in ways that make wreaking havoc trivially easy. It takes only a bug—not even consciousness, not even malevolence—for something that powerful to take us back to the Stone Age. Fortunately, history shows that Version 1.0 of all significant software systems is bug-free.

Oops.
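As for the claim that self-improvement blows past human level: the underlying arithmetic is ordinary compounding. Here is a toy model (the numbers are arbitrary; only the shape of the curve matters):

# Toy model of recursive self-improvement: each cycle the system gets
# better at the task, and also better at getting better. Nothing
# special happens at "human level" (100 here), so there is no reason
# for growth to stop there.
HUMAN_LEVEL = 100.0
capability, rate = 1.0, 1.0

for step in range(1, 36):
    capability += rate   # improve at the task...
    rate *= 1.25         # ...and improve the rate of improvement
    if step % 5 == 0:
        note = "  <-- past human level" if capability > HUMAN_LEVEL else ""
        print(f"step {step:2d}: capability {capability:10.0f}{note}")

Parity with humans arrives around step 15, and twenty steps later the system is two orders of magnitude beyond it.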

Elon Musk and I don't want that to be on the cover of the last issue of Time magazine ever published. Zuckerberg is more of a developer, and I have found that it is hard for developers to see the existential risks here, probably because they developed the code, they know every line of it, and they know that nowhere in it reside the lines

if ( threatened ) {
    wipe_out_civilization();
}

Of course, they understand emergent behavior; but when they've spent so much time so close to software they know intimately, it is easy to pooh-pooh assertions that it could rise up against us as uninformed gullibility. Well, I'm not uninformed about software development either. And yet I believe that we may soon be developing systems that do display drastic emergent behavior, and that by then it will be too late to take appropriate action.

Whether this cascade of crises happens in 15, 20, or 30 years, we should start preparing for it now, before we discover that we ought to have nudged this thing in another direction ten years earlier. And since it requires a vastly elevated understanding of human ethics, it may well take decades to learn what we need to give our AGIs not just superintelligence, but supercompassion.

Artificial Intelligence, Employment

Keep on Truckin’

An article on Bloomberg suggests that in the short term at least, autonomous trucks have the potential to make the lives of truckers better by allowing them to teleoperate trucks and therefore see their families at night. Of course, many of them see this as the prelude to not being needed at all:

“I can tell the difference between a dead porcupine and a dead raccoon, and I know I can hit a raccoon, but if I hit a porcupine, I’m going to lose all the tires on the truck on that side,” says Tom George, a veteran driver who now trains other Teamsters for the union’s Washington-Idaho AGC Training Trust. “It will take a long time and a lot of software to program that competence into a computer.”

Perhaps.  Or maybe it just takes driving long enough in reality or in training on captured footage to encounter both kinds of roadkill and learn by experience.

Artificial Intelligence, Employment

This Time It’s Different

This superb video drives a stake through the heart of the meme that progress always equals more and better jobs:

All this and a cast of cartoon chickens. This is where it becomes very clear that we need to analyze second-order effects; the video only starts wondering about those at the end. If we get very good at producing cheaper products at the expense of more and more jobs, who will buy those products? Who will be able to afford them if there is a rising underclass of unemployed that has trouble getting food, let alone iPhones? Sure, the market may turn to higher luxury items, such as increasingly tricked-out autonomous cars affordable by the 1% (or less) who own the companies, but that is an unstable dynamic, a vicious circle. What will terminate that runaway feedback loop?
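As a deliberately crude sketch of that loop, here is a toy simulation; every parameter is invented, and only the direction of the feedback is the point:

# Crude toy of the vicious circle: automation removes jobs, the newly
# jobless stop buying, and weak sales remove more jobs. All numbers
# are made up; only the feedback structure matters.
employment, demand = 1.00, 1.00

for year in range(1, 11):
    employment *= 0.97 * demand      # automation plus demand-driven layoffs
    demand = 0.2 + 0.8 * employment  # consumption tracks employment, with a floor
    print(f"year {year:2d}: employment {employment:.2f}, demand {demand:.2f}")

Employment and demand chase each other downward, and nothing inside the loop stops the slide.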

Artificial Intelligence, Employment

When will a machine do your job better than you?

Katja Grace of the Future of Humanity Institute at the University of Oxford and her fellow authors surveyed the world's leading researchers in artificial intelligence, asking them when they think intelligent machines will outperform humans at a wide range of tasks. They averaged the answers and published them at https://arxiv.org/pdf/1705.08807.pdf. The results are… surprising.
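For a rough sense of what aggregating such forecasts looks like, here is a much-simplified stand-in; the paper's actual method fits probability distributions per respondent, and the numbers below are hypothetical:

from statistics import median

# Hypothetical expert responses: years until machines outperform
# humans at each task. A simple median stands in for the paper's
# more elaborate aggregation.
forecasts = {
    "translate languages": [8, 10, 15, 20, 40],
    "drive a truck": [5, 8, 10, 12, 25],
    "write a bestselling novel": [20, 30, 33, 50, 90],
    "perform AI research": [50, 80, 88, 100, 200],
}

for task, years in forecasts.items():
    print(f"{task:26s} median estimate: {median(years):3.0f} years")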

First up, AIs will reach human proficiency in the game of Go in 2027… wait, what? Ah, but this survey was conducted in 2015. As I noted in Crisis of Control, before AlphaGo beat Lee Sedol in 2016, that milestone was expected to be a decade away; here's the numeric proof. This really shows what a groundbreaking achievement that was, to blindside so many experts.

Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than it is today. And when they analyzed the results by demographics, only one factor was significant: geography. Asian researchers think human-level machine intelligence will be achieved much sooner.

Amusingly, their predictions for when different types of jobs will be automated cluster mostly under 50 years from now, with one far outlier over 80: apparently, the job of “AI Researcher” will take longer to automate than anything else, including surgeon. Might be a bit of optimism at work there…


Artificial Intelligence, Employment, Technology

Sit Up and Beg

More reader commentary:

“If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. […] Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over.”

Totally agree, except this will not be as easy as some may think. I think the most important attribute of great programmers is not their programming skill but their ability to take a small number of broad requirements and turn them into the extremely detailed requirements necessary for a program to succeed in most or all situations and use cases, e.g. boundary conditions. As somewhat of an aside, we hear even today about how a requirements document given to developers should cover ‘everything’. If it really covered everything, it would have to be on the order of the number of lines of code it takes to create the program.
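The commenter's boundary-condition point is worth making concrete. A small invented example: a requirement that sounds like one line but hides several decisions nobody stated.

# "Average the daily readings" sounds like one line of code, but the
# broad requirement hides boundary conditions nobody stated: empty
# days, dead sensors, physically impossible values.
def daily_average(readings):
    valid = [r for r in readings if r is not None and -50.0 <= r <= 150.0]
    if not valid:              # boundary condition: nothing usable today
        return None            # ...and someone had to decide what that means
    return sum(valid) / len(valid)

print(daily_average([20.1, 19.8, None, 9999.0]))  # dead/implausible readings filtered
print(daily_average([]))                          # the empty day handled explicitly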

If there’s been anything about developers that elevated them to some divine level, it isn’t their facility with the proletarian hardware but their ability to read the minds of the humans giving them their requirements, to be able to tell what they really need, not just better than those humans can explicate, but better than they even know. That talent, in the best developers (or analysts, if the tasks have been divided), is one of the most un-automatable acts in employment.

The quotation was from Wired magazine; I think, however, that it has to be considered in a somewhat narrow context. Many of the tough problems being solved by AIs now are solved through training. Facial recognition, voice recognition, medical scan diagnosis: the best approach is to train some form of neural network on a corpus of data and let it loose. The more problems that are susceptible to that approach, the more developers will find their role to be one of mapping input/output layers, gathering a corpus, and pushing the Learn button. It will be a considerable time (he said, carefully avoiding quantifying ‘considerable’) before that approach is applicable to the general domain of “I need a process to solve this problem.”
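That workflow already looks something like the following sketch (scikit-learn again; the corpus is synthetic and the specifics are illustrative, not anyone's production pipeline):

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# "Gather a corpus": synthetic feature vectors with labels, standing
# in for whatever real data the problem supplies.
X = rng.normal(size=(500, 20))           # input layer: 20 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # output: 2 classes

# "Map the input/output layers, push the Learn button."
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")

The craft shifts from writing the rules to choosing the shape of the problem and curating the corpus.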