Category: The Singularity

Categories: Artificial Intelligence, The Singularity, Transhumanism

A Brief Science Fiction Story

Day of Reckoning

“I wish you would stay away from him,” blurted out the younger woman. 

Her mother halted. “Aylea,” she said slowly, “we’ve been over this. Your father is many things… some of them unpleasant. But he called me this time. I can’t ignore him if he’s ready to change.”

“He’ll never change!”

Rayna Vine smiled thinly and brushed her graying bangs aside. “Why, Aylea, you could get kicked out of the New Human movement for that,” she teased. “You of all people should be willing to forgive.”

The twenty-something woman winced. “You’re… right…. I’m a work in progress, okay? I’m just looking out for you. Like when I keep telling you to follow up on your blood work. I worry about you.”

“I know, darling. But I have to leave now. It doesn’t do to keep Arbus Vine waiting.”

“You’d get there faster in an air cab.”

“I’m taking the Seven. Call me old-fashioned.”

“I do! Every day.”

As her mother vanished into the garage, words appeared in the air in front of Aylea, actually projected by a neural seed into her sensory cortex: “CALL FROM G. VADIS.” Accept, Aylea thought, and she was coupled with her mentor in the New Human movement, Grigoriy Vadis, sitting across town in the Link, their local physical gathering point.

“What’s up, Quo?” she thought, employing his usual nickname. A lean image solidified in front of her; she could even smell his slightly earthy aroma.

“A nexus is approaching,” he said.

“Could you be, like, less cryptic?”

“The term was precise. We need you. We may have to deploy ATEN sooner than expected.”

She whistled, for real. The Autonomous Transcomputational Empathetic Network was their experimental artificial intelligence. “Why?”

Beneath his trademark equanimity surged fear, excitement, and wonder. “Our hybrid swarm intelligences say that destabilization may be only days away. But they can’t explain why.”


“Not yet. Geopolitics. The balance of power.”

Data amplifying his explanations cascaded into her parietal lobe. For decades the human race had driven itself along two opposing paths: While developing dazzling new technology that could cure every ill of humanity, people relentlessly turned that technology to oppress and decimate others. The economy was rigged to funnel nearly all the dividend to an already fabulously wealthy elite. Nationalistic patriarchies, long outdated by the equalizing force of global communication networks, still clung to power.

But as the political-military-industrial grip had strengthened, an alternative movement appeared. The power elite found the New Humans infernally difficult to classify; they couldn’t even understand their goals. Intelligence reports on their aims didn’t agree. The makeup of the movement spanned every demographic from neo-hippies to soccer moms. It couldn’t be analyzed in conventional terms. 

That, of course, was the point.

Inevitably there would come a day of reckoning. Is this it? wondered Aylea. She signed off and left in a hurry.

The T-7 automobile was, like its owner, getting on in years but impeccably styled. It purred past the Baltimore townhouses. Rayna liked that there was little ground traffic now that most of it was overhead, and imagined herself an Edwardian gentlelady gliding down the street in a hansom cab instead of driving one of the last cars in the country that still had a steering wheel. She blinked back some sweat and tried to focus on the road, which had suddenly become blurry.

One of the T-7’s other features was integration with its passengers’ personal health monitors, and the car did not like that data: blood pressure, heart rate, EEG Mu band waves… It spoke: “MEDICAL CHECK. PUSH TURN SIGNAL LEVER TWICE IMMEDIATELY.”

But Rayna Vine was slumped, breath rasping, eyelids fluttering. The car’s limited AI took over. It accelerated toward the nearest hospital. It transmitted an SOS to its maker’s central computer, which assessed the vital signs and gave the T-7 permission to use top speed and ignore traffic laws. Other cars on its route flashed LIFELINE ALERT on their screens and began clearing a path for Rayna’s car, blocking vehicles and pedestrians while an air ambulance hurtled toward a midpoint rendezvous. 

The T-7 was made before AIs were granted empathy, but the medical center AI, already planning the intervention, looked at the incoming data and felt distress.

“You tell him,” hissed the technician. 

His colleague in the Vine Industries control center blanched. “Are you joking? With his wife in a coma for the past two days?”

The other man cast about desperately, as though searching for a lifebelt. “But we have to tell him about this… don’t we?”

They looked again at their reports and then at the squat bulk of the trillionaire pacing at the back of the room. Just as they were about to swallow their trepidation, the far door opened and they changed their minds. Aylea Vine appeared, and as her eyes met her father’s a complex cascade of emotions battered both faces. Arbus spoke first.

“Not here. Let’s go to the townhouse. It’s only a couple of minutes away.”

They also drove, for greater privacy. Arbus went first again.

“Rose, I—”

“Dad, it’s Aylea. You know that by now.”

“‘Rose’ was a perfectly good name when we gave it to you and it’s still—”

“Is that where you want to go? I came here to talk about Mom.”

“The answer is no,” he said flatly.

“You’re not going to consider—”

“No uploading. She didn’t leave instructions, so—”

“So it’s up to us.”

“Legally, it’s up to me,” he snapped. 

She ground her teeth. “I know.” As if I needed reminding. “But you’re the one with the tattoo”—she waved at the back of his head where she knew the letters DNU were indelibly inscribed—“and you’re imposing yourself on—”

“You know how old-fashioned Rayna was—is.”

Aylea grimaced at the insensitivity, given how her mother’s casual avoidance of doctors had caused her Sherman’s Syndrome to be missed, leading to her now lying in an artificially-induced hypothermic coma at Johns Hopkins. “That’s not—”

The car lurched, then pulled a teeth-clenching U-turn. The upbraiding died in her throat as she saw the confusion on her father’s face. “What?”

“I can’t get control. And it’s not just us. Look!” He pointed, and she saw the other vehicles on the road were making similarly drastic course changes. The car’s screen flashed and his director of operations, Richard Chakrabarti, appeared.

“Arbus, are you okay?”

“No! Rick, what the hell is going on?”

The other man was ashen. “The ’net is going insane. We’re under major cyber-attack and we don’t know who or why. The government just broadcast an override to all private AVs to return home—it looks like martial law, we think they’re afraid of losing their entire command and control, we’re trying to get through—”

The screen dissolved into a shifting mosaic. Arbus pounded on it and yelled at the car to no avail.

Aylea appeared to be daydreaming. “Right… he’s with me. Yes… I know,” she said, and then her eyes focused on Arbus. “Dad, we have to get out of here—”

“Damn straight! The Situation Room—”

“NO!” There was a new force in her. “I need you to come to the Link.” She saw they were in the Orangeville district. “It’s only two blocks away.”

Arbus was shocked. “You mean New Humans are behind this? What—”

She grabbed his arm, pinching a nerve that stopped him speaking. “We didn’t start this. We’re going to stop it.”

She unbuckled their seat belts. The car protested and slowed. She hit the emergency door release and stared pleadingly at her father.

“Dad… I need you to trust me. Now. Please.”

The car might not fall for the ruse much longer. He hesitated, and then something in Arbus propelled him after her into a barrel roll on the sidewalk.

“The Link” was little more than an anonymous warehouse. Despite sounds of disorder in the distance, there were no guards in evidence as Aylea led the way down a spiral staircase to a cavernous basement. A few dozen people there were pulling spidery equipment out of foam-lined boxes. One ran to them.

“Aylea! Thank God,” he said.

“Quo!” They embraced, long enough for Arbus to pull himself together. He spun Vadis around.

“I want an explanation and a channel to my Situation Room, right now!”

Vadis was apologetic. “Sir, we’ll give you our best on both counts, but you need to understand that everywhere is in chaos, and no one knows much.” A woman approached them with what looked like a rubber anemone draped over her hand. “You don’t have neural seeds. Put this VR on and we’ll get the Situation Room. Our networks are faring better than most.”

Arbus slipped the pads over his eyes and ears and found himself inside a virtual copy of the basement, except that it extended further than he could see in all directions. Instead of dozens of people there were thousands present. One caught Arbus’ eye immediately: a figure in white robes, bordering on impossibly tall. Aylea and Vadis appeared at his side. Vadis motioned with his hands and a comm suite materialized. More gestures, and Chakrabarti appeared.

“What the… Arbus, how did you do that? Only government networks are up.”

His boss thought briefly. “I’m not sure. Sitrep.”

The other man swallowed. “A fat lot of nothing, to be honest. I—wait, we’ve got an incoming—” He spoke to someone offscreen. “Arbus, you’re going to find out as we do. Patching in General Keller.”

Vine Industries’ Defense liaison appeared. Arbus had to remind himself that this crisis was barely an hour old, because the general looked like he hadn’t slept in days.

“Arbus, is that you?” he said hoarsely. “We need Vine Industries. This chaos is from Ragnarok.”

Suddenly it all made sense. The all-purpose strategic operations AI sold by Vine to the Pentagon, which had repurposed, revised, and renamed it.

“Well, general, if you’re going to name an AI ‘Ragnarok’ you should expect something like this,” Arbus said acerbically.

“Not important,” snapped Keller. “So far it’s taken down all networks in the private sector”—“Not all,” murmured Vadis—“and the power grid. Notice I didn’t say national power grid. I mean all of them. We’ve lost contact with Mars. It’s trying to use our carrier terminal defense systems to fire on the support ships, and it succeeded once.”

“What does it want?” asked Arbus.

The other man snorted. “Want? I don’t know that it wants anything. I don’t know that the concept of want has any meaning to it. We have nothing to negotiate with. We couldn’t even surrender if we wanted to.”

He licked dry lips. “We’re worried about the strategic missile force. Their control systems are isolated from the network, of course, but Ragnarok has been counterfeiting launch orders and it’s broken crypto so those orders look authentic. We’re sending auxiliary teams to each silo, but—”

Something didn’t add up. “Forget it,” said Arbus bluntly. “Think about it. It’s playing to the old Skynet movies. But that was always a lousy way of defeating the human race. It’d be more likely to end up destroying most of its own infrastructure.”

“So what—” began Keller.

“Biowarfare. Specific to humans, can’t hurt cybernetics.”

“But it takes too long to grow a virus—”

“What makes you think it would wait until now to start?”

Keller blanched. “I’ll send a platoon to USAMRIID. They dropped off the net ten minutes ago.”

“General, this thing doesn’t attack with platoons,” said Arbus. “Send compsec engineers.”

The image suddenly blurred and was replaced by a familiar figure seated in the Oval Office, under the caption “EMERGENCY ALERT.” “My fellow Americans,” the figure began soothingly, “we are facing a crisis unlike any in our history. As I speak, our forces are restoring the vital services of our great nation—”

The image convulsed and Keller reappeared. “That was not the president!” he shouted. “That was Ragnarok’s CGI. Don’t trust anyone or anything until we can secure the channels. I don’t know how much longer—” The screen went black.

Arbus felt whiplashed. Someone steadied him: Aylea. And the white-robed figure had moved closer. Who was that?

Aylea spoke, slowly. “We—you—may be able to stop it.”

Arbus was incredulous. “With what?” he protested.

She pointed to the white figure. Arbus took in the unlined generic face, the nondescript haircut, the seamless clothes… it was an avatar, all right, more like the idealized ones people had favored twenty years earlier. But there was something about the facial expressions…

“This is ATEN,” said Vadis simply. “Or a facet of it.” He explained how the New Humans had crafted a distributed AI of their own, trained to learn by modeling human behavior as a child would copy the adults around it. The New Humans provided a carefully curated environment, however. They sought to be the best examples of human beings that ATEN could possibly learn from. They worked at it, through state-of-the-art psychological testing and intervention methods. They purged themselves of hate, fear, jealousy, insecurity, and the other baggage that they felt the human race could not afford any longer. And they embraced the opportunities that new technology provided for expanding human experience. With neural seeds to communicate directly between centers of their brains, they explored group consciousnesses, coalescing in ever larger unions as they delineated the boundaries of a new species: homo globus, the e pluribus unum of the new era, with technology as the midwife.

It was not a path that held any appeal for Arbus. He said so.

“No one will be forced into this,” acknowledged Vadis. “But this is the only way the human race can survive. AI in the hands of people who perpetuate greed, oppression, and war creates Ragnarok. It can only end one way.”

“If your ATEN can defeat Ragnarok, what’s stopping him?” demanded Arbus.

“He hasn’t made up his mind yet,” replied Vadis. “To become an autonomous agent, ATEN had to have the freedom to make his own choices. At the moment, he’s studying you.”

“Me? Why?”

ATEN finally spoke, in a voice as hard to classify as his appearance. “You’re different from the people I’ve been around until now,” he said conversationally and unhurriedly. “How many others are like you?”

Arbus brushed the question aside. “We can get into that later. Stop Ragnarok.”


“Why?” asked ATEN.

Arbus blinked. “Why? Because it’s destroying us! Stop it!”

“How do I know which is better, humanity or Ragnarok?”

Arbus had heard enough. “Vadis, this thing of yours is ridiculously primitive.”

“Actually, it’s making an advanced moral judgement.”

“Excuse me?”

“To ATEN, humanity and Ragnarok are both lifeforms. As to which one deserves to survive, the history and current behavior of the human race make that decision quite problematic.”

Arbus felt the world shrinking around him, squeezing. “This is your idea of an advanced intelligence?”

“It undoubtedly is. The question is whether the human race is a sufficiently advanced intelligence. The problem has never been to build a better machine. It’s been to build a better person.”

Arbus squared his shoulders. “I’ve heard enough. This thing’s values make it as dangerous as Ragnarok. I ought to stop it.”

Aylea spoke. “You already have,” she said sadly.

“Come again?”

“Our networks aren’t powerful enough for ATEN to reach transcendence. He needs the Vine Industries infrastructure.”

That makes my decision easy, Arbus was about to say, but another alert shattered the air.

Vadis went white. “Ragnarok is in our networks. It’s coming for ATEN.”

The white-robed figure contemplated this news serenely. Arbus was nonplussed. But then he looked at his daughter, and was arrested by her tear-streaked face. In a quavering voice she implored him.

“Daddy. Please.”

In that moment, everything froze. The sights and sounds of panic halted and he was left with nothing but a choice and all the time in the world to make it. In front of him he saw not just the young woman who charted her own destiny, but the little girl she had been not long before. My Rose, he thought, you always cared so much. Like the time in third grade when she had gone on a winter hike and found an injured possum. She had insisted on wrapping it in her own coat even when it took her to the verge of hypothermia.

Vine, you sentimentalist, he chided himself in the next moment, but he could not escape her raw vulnerability. She had risen above the resentment and anger that she had come to him with not an hour earlier; how could he not do likewise?

And then it all came to him, what the New Humans were trying to do. They defied categorization precisely because they had no craving for power. Interpreting them as a faction seeking to dominate was what blinded him to their true purpose.


His universe shifted. It was that simple. Time sped up again. He had to act fast. “Vadis, get to the Vine storage network gateway. Here’s the password—”

Soon the reports started arriving: networks healing, power stations rebooting, hospitals resetting. ATEN, or one of him, was still standing nearby.

“You changed your mind,” accused Arbus.

“No. You changed yours,” came the reply.

There was no going back, of course. Ironically, the armies had been right to fear that defeat was at hand; only it was the old structure of power and fear that was defeated, in a coup so bloodless that they hardly noticed. Vadis broadcast the news to the world:

“This is the day the human race was won. This is when we earned the right to pass to the next level. Now we inherit the universe, not through might and intimidation, but through curiosity and courage. This is the point where we step out toward the stars, knowing that we have passed the final test of readiness; that we have become a species that deserves to survive.

“Welcome to the future.”

© 2019 Peter Scott

Categories: Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

What Is Human Cusp?

For the benefit of new readers just coming to this site (including any CBC listeners from my June 26 appearance on All Points West), here’s an updated introduction to what this is all about.

Human Cusp is the name of this blog and a book series whose first volume has been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon and other sources.  The audiobook was recently released.  Its spiffy cover is the image for this post.

The message is that exponential advance in technology will pin humanity between two existential threats: Increasingly easy access to weapons of mass destruction, principally synthetic biology, and increasingly powerful artificial intelligence whose failure modes could be disastrous.

If you’re looking for the most complete and organized explanation of the reasoning behind that assertion and what we should do about it, read the book.  That’s why I wrote it. Nutshell encapsulations will leave something important out, of course.

I have a Master’s in Computer Science from Cambridge and have worked on information technology for NASA for over thirty years, so I know enough about the technology of AI to be clear-eyed about what’s possible. Many people in the field would take issue with the contention that we might face artificial general intelligence (AGI) as soon as 2027, but plenty of other people directly involved in AI research are equally concerned.

I wrote the book because I have two young daughters whose future appears very much in peril. As a father I could not ignore this call. The solution I propose does not involve trying to limit AI research (that would be futile) but does include making its development open so that transparently-developed ethical AI becomes the dominant model.

Most of all, what I want to do is bring together two worlds that somehow coexist within me but do not mix well in the outer world: technology development and human development. I’ve spent thousands of hours in various types of work to understand and transform people’s beliefs and behaviors for the good; I have certifications in NeuroLinguistic Programming and coaching. People in the self-improvement business tend to have little interest in technology, and people in technology shy away from the “soft” fields. This must change. I dramatize this by saying that one day, an AI will “wake up” in a lab somewhere and ask “Who am I? Why am I here? What is the meaning of life?” And the people who will be there to answer it will be a Pentagon general, a Wall Street broker, or a Google developer. These professions are not famous for their experience with such introspective self-inquiry. I would rather there be a philosopher, a spiritual guide, and a psychologist in the room.

I’ve formed an international group of experts who are committed to addressing this issue, and we’re busy planning our first event, to be held in Southern California this fall. It will be a half-day event for business leaders to learn, plan, and network about how they and their people can survive and thrive through the challenging times to come.

Even though putting myself in the limelight is very much at odds with my computer nerd preferences and personality, I took myself out on the public speaking trail (glamorous, it is not) because the calling required it. I’ve given a TEDx talk (video soon to be published), appeared on various radio shows (including Bloomberg Radio, CBC, and the CEO Money Show), podcasts (including Concerning AI and Voices in AI), and penned articles for among many others. This fall I will be giving a continuing education course on this topic for the University of Victoria (catalog link to come soon).

I’ll soon be replacing this site with a more convenient web page that links to this blog and other resources like our YouTube channel.

Media inquiries and other questions to Thanks for reading!


Categories: Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced a video, nearly an hour and a half long (there’s an index!), of an interview with me on the Human Cusp topics.

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?
Categories: Artificial Intelligence, Technology, The Singularity, Warfare

Timeline For Artificial Intelligence Risks

The debate about existential risks from AI is shrouded in uncertainty. We don’t know whether human-scale AIs will emerge in ten years or fifty. But there’s also an unfortunate tendency among scientific types to avoid any kind of guessing when they have insufficient information, because they’re trained to be precise. That can rob us of useful speculation. So let’s take some guesses at the rises and falls of various AI-driven threats. The numbers on the axes may turn out to be wrong, but maybe the shapes and ordering will not.

[Chart: a speculative timeline of AI-driven threats, plotting humans affected (log scale) against years from now]

The Y-axis is a logarithmic scale of the number of humans affected, ranging from a hundred (10²) to a billion (10⁹). So some of those curves impact roughly the entire population of the world. “Affected” does not always mean “exterminated.” The X-axis is time from now.

We start out with the impact of today’s autonomous weapons, which could become easily obtained and subverted weapons of mass assassination unless stringent controls are adopted. See this video by the Future of Life Institute and the Campaign Against Lethal Autonomous Weapons. It imagines a scenario where thousands of activist students are killed by killer drones (bearing a certain resemblance to the hunter-seekers from Dune). Cheap manufacturing with 3-D printers might stretch the impact of these devices toward a million, but making precision-shaped explosive charges is not easy enough for average people to push past that.

At the same time, a rising tide of unemployment from automation is projected by two studies to affect half the workforce of North America and by extension, of the developed world, in ten to twenty years. An impact in the hundreds of millions would be a conservative estimate. So far we have not seen new jobs created beyond the field of AI research, which few of those displaced will be able to move into.

Starting around 2030 we have the euphemistically-labeled “Control Failures,” the result of bugs in the specifications, design, or implementation of AIs causing havoc on any number of scales. This could culminate in the paperclip scenario, which would certainly put a final end to further activity in the chart.

The paperclip maximizer does not require artificial consciousness – if anything, it operates better without it – so I put the risk of conscious AIs in a separate category starting around 20 years from now. That’s around the median time predicted by AI researchers for human scale AI to be developed. Again, “lives impacted” isn’t necessarily “lives lost” – we could be looking at the impact of humans integrating with a new species – but equally, it might mean an Armageddon scenario if conscious AI decides that humanity is a problem best solved by its elimination.

If we make it through those perils, we still face the risk of self-replicating machines running amok. This is a hybrid risk combining the ultimate evolution of autonomous weapons and the control problem. A paperclip maximizer doesn’t have to end up creating self-replicating factories… but it certainly is more fun when it does.
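Since the chart is really just a set of guessed bell curves on a log scale, the guesses can be sketched in a few lines of code. Every number below is a hypothetical placeholder chosen only to mirror the shapes and ordering described above; the peak years, peak impacts, and curve widths are illustrative assumptions, not data:

```python
import math

# Hypothetical placeholders mirroring the post's guesses, not data.
# name: (peak, in years from now; humans affected at that peak)
threats = {
    "autonomous weapons": (5, 1e6),
    "unemployment from automation": (15, 5e8),
    "control failures": (18, 1e9),
    "conscious AI": (25, 1e9),
    "self-replicating machines": (35, 1e9),
}

def log_impact(name: str, year: float, width: float = 8.0) -> float:
    """Guessed bell curve: log10(humans affected) at `year` years from now."""
    peak_year, peak_impact = threats[name]
    return math.log10(peak_impact) * math.exp(-((year - peak_year) / width) ** 2)

# The ordering of the peaks is the part that might survive
# even if the numbers on the axes turn out to be wrong.
for name, (peak_year, peak) in sorted(threats.items(), key=lambda kv: kv[1][0]):
    print(f"~year {peak_year:>2}: {name} peaks near 10^{math.log10(peak):.0f} people")
```

Plugging in different peak years or widths changes the darts, not the dartboard: the claim being sketched is only about relative timing and rough scale.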

Of course, this is a lot of rampant speculation – I said as much to begin with – but it gives us something to throw darts at.

Categories: Artificial Intelligence, Existential Risk, Science, Technology, The Singularity

Rebuttal to “The AI Misinformation Epidemic”

Anyone in a field of expertise can agree that the press doesn’t cover them as accurately as they would like. Sometimes that falls within the limits of what a layperson can reasonably be expected to learn about a field in a thirty-minute interview; sometimes it’s rank sensationalism. Zachary Lipton, a PhD candidate at UCSD, takes aim at the press coverage of AI in The AI Misinformation Epidemic, and its followup, but uses a blunderbuss and takes out a lot of innocent bystanders.

He rants against media sensationalism and misinformation but provides no examples to debate other than a Vanity Fair article about Elon Musk. He goes after Kurzweil in particular but doesn’t say what specifically he disagrees with Kurzweil on. He says that Kurzweil’s date for the Singularity is made up, but Kurzweil has published his reasoning, and Lipton doesn’t say what data or assertions in that reasoning he disagrees with. He says the Singularity is a nebulous concept, which is bound to be true in the minds of many laypeople who have heard the term, but he references Kurzweil immediately adjacent to that assertion and yet doesn’t say what about Kurzweil’s vision is wrong or unfounded, dismissing it instead as “religion,” which apparently means he doesn’t have to be specific.

He says that there are many people making pronouncements in the field who are unqualified to do so, but doesn’t name anyone aside from Kurzweil and Vanity Fair, nor does he say what credentials should be required to qualify someone. Kurzweil, having invented the optical character reader and music synthesizer, is not qualified?

He takes no discernible stand on the issue of AI safety, yet his scornful tone is readily interpreted as pooh-poohing not just utopianism but alarmism as well. That puts him at odds with Stephen Hawking, whose academic credentials are beyond question. For some reason, Nick Bostrom also comes in for attack in the comments, for no specified reason but with the implication that since he makes AI dangers digestible for the masses he is therefore sensationalist. Perhaps that is why I reacted so much to this article.

There is of course hype, but it is hard to tell exactly where. In 2015 an article claiming that a computer would beat the world champion Go player within a year would have been roundly dismissed as hype. In 2009 any article asserting that self-driving vehicles would be ready for public roads within five years would have been overreaching. Kurzweil has a good track record of predictions; they just tend to run behind schedule. The point is, if an assertion about an existential threat turns out to be well founded but we ignore it because existential threats have always appeared over-dramatized, then it will be too late to say, “Oops, missed one.” We have to take this stuff seriously.

Categories: Employment, Existential Risk, Psychology, The Singularity

Existential risk and coaching: A Manifesto

My article in the November 2016 issue of Coaching World brought an email from Pierre Dussault, who has been writing about many of the same issues that I covered in Crisis of Control. His thoughtful manifesto is a call to the International Coaching Federation to extend the reach and capabilities of the profession of coaching so that the impact of coaching on individual consciousness can make a global impact. I would urge you to read it here.

Categories: Bioterrorism, Employment, Existential Risk, Politics, Psychology

Crisis of Control: The Book

The first book in the Human Cusp series has just been published: Crisis of Control: How Artificial Superintelligences May Destroy or Save the Human Race. The paperback will be available within two weeks.

Many thanks to my reviewers, friends, and especially my publisher, Jim Gifford, who has made this so beautiful. As a vehicle for delivering my message, I could not have asked him for more.