Category: Transhumanism

Artificial Intelligence, Transhumanism

Book Review – “Becoming a Butterfly”

Recently on my AI and You podcast, my guest was Tony Czarnecki, author of “Becoming a Butterfly,” available from Amazon.

This is a prescient work that deserves your attention. It is the third in the “POSTHUMANS” series, following “Federate to Survive!” – which enumerates our existential threats – and “Democracy for a Human Federation” – which proposes how we might survive them. This work completes the trilogy by asking who humanity might become after a successful survival, building on Tony’s earlier book “Who Could Save Humanity From Superintelligence?”

Tony is thinking on a grand scale, as you might expect from a key speaker at the London Futurists. He thinks out to the logical conclusion of the trends we are beginning to experience now, and foresees – as many of us do – a rendezvous with destiny that goes either very well or very badly for us all. The path to the favorable outcome requires us to assume control over our own evolution, in Tony’s view, and so he lays out how that ought to happen. These sweeping prescriptions, such as “Build a planetary civilization,” may appear thoroughly unrealistic; Tony acknowledges this, but is unafraid to make the case, and readers of this blog will know that it is one I share. We can hope at least for a Hundredth Monkey Effect. Tony repeatedly outlines how surviving the near-term existential threats will propel us to a condition where we will be resilient against all of them.

Tony delineates the exponential factors driving us towards this nexus and then describes the attributes of a planetary civilization: a Type I civilization on the Kardashev scale, able to harness the total energy available to the entire planet. (We’re at around 0.75 on this scale at the moment.) To get further, though, we need to be more resilient against the kinds of threats that accelerate along with our population and technology, and here the author draws on our current experience of the pandemic to illustrate his point while giving a numeric treatment of threat probabilities.
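A brief aside of my own rather than the book’s: a fractional rating like 0.75 usually comes from Carl Sagan’s interpolation of the Kardashev scale, K = (log10 P - 6) / 10, where P is the civilization’s power use in watts. Here is a minimal sketch of that arithmetic, assuming a round figure of about 18.5 terawatts for current world power consumption (my assumption, not a number taken from the book):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Carl Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type I (K = 1) corresponds to roughly 10**16 W."""
    return (math.log10(power_watts) - 6) / 10

# Assumed current world power consumption: roughly 18.5 terawatts.
print(round(kardashev_rating(18.5e12), 2))  # -> 0.73, in the ballpark of the ~0.75 cited
```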

Tony is happy to make specific suggestions as to what the government should do to achieve that resiliency; the problem is that those suggestions, while naturally pitched at the government of the author’s homeland, the United Kingdom, need to be picked up on a global scale. One government acting alone cannot expect these measures to gain traction any more than one government could make a Paris Climate Accord. (With the possible exception of the United States and its power to wield the global reserve currency like a baseball bat.)

Czarnecki then tackles the subject of superintelligence: What drives the evolution of AI, what might the risks of superintelligence be, can it be conscious, and how should we curate it? This is where he connects the dots with transhumanism. This, of course, is a touchy subject. Many people are petrified at even the most optimistic scenarios of how humanity might evolve in partnership with AI, and futurists owe it to this audience to provide the most reassurance we can.

Czarnecki refers extensively to the need to federate, which was laid out in one of his earlier books. His examples are Europe-based, and North American audiences would find the book more relatable if some were drawn from their own experience. In particular, Americans in general are somewhat allergic to the United Nations, and Czarnecki’s proposals should clearly demarcate for them the limits of the power he suggests such a body should exercise. He recognizes this by suggesting that the USA may be among the countries not participating in the world government he proposes, but this strikes me as leaving out an essential ally in plans of this scope. I’ll leave you to discover in the book which body he settles on as the best candidate for leading the world into the new federation. (And Star Trek fans can hardly object to plans for creating a Federation, no?)

There is much more, including discussions of potential pitfalls, economic realities, and likely scenarios for armed conflict along the way to what Tony calls the Novacene – the new era. These sweeping paths are treated with a thoroughness that suggests the book could serve as a textbook in the right kind of course – perhaps The History of the Future. Listeners of my podcast know that my thoughts already tend in the direction of education.

In summary, “Becoming a Butterfly” is a serious work, to be taken seriously. Don’t try to skim through it in a few sessions; it demands your engagement and will reward it accordingly.

Artificial Intelligence, The Singularity, Transhumanism

A Brief Science Fiction Story

Day of Reckoning

“I wish you would stay away from him,” blurted out the younger woman. 

Her mother halted. “Aylea,” she said slowly, “We’ve been over this. Your father is many things… some of them unpleasant. But he called me this time. I can’t ignore him if he’s ready to change.”

“He’ll never change!”

Rayna Vine smiled thinly and brushed her graying bangs aside. “Why Aylea, you could get kicked out of the New Human movement for that,” she teased. “You of all people should be willing to forgive.”

The twenty-something woman winced. “You’re… right…. I’m a work in progress, okay? I’m just looking out for you. Like when I keep telling you to follow up on your blood work. I worry about you.”

“I know, darling. But I have to leave now. It doesn’t do to keep Arbus Vine waiting.”

“You’d get there faster in an air cab.”

“I’m taking the Seven. Call me old-fashioned.”

“I do! Every day.”

As her mother vanished into the garage, words appeared in the air in front of Aylea, actually projected by a neural seed into her sensory cortex: “CALL FROM G. VADIS.” Accept, Aylea thought, and she was coupled with her mentor in the New Human movement, Grigoriy Vadis, sitting across town in the Link, their local physical gathering point.

“What’s up, Quo?” she thought, employing his usual nickname. A lean image solidified in front of her; she could even smell his slightly earthy aroma.

“A nexus is approaching,” he said.

“Could you be, like, less cryptic?”

“The term was precise. We need you. We may have to deploy ATEN sooner than expected.”

She whistled, for real. The Autonomous Transcomputational Empathetic Network was their experimental artificial intelligence. “Why?”

Beneath his trademark equanimity surged fear, excitement, and wonder. “Our hybrid swarm intelligences say that destabilization may be only days away. But they can’t explain why.”

“Singularity?”

“Not yet. Geopolitics. The balance of power.”

Data amplifying his explanations cascaded into her parietal lobe. For decades the human race had driven itself along two opposing paths: While developing dazzling new technology that could cure every ill of humanity, people relentlessly turned that technology to oppress and decimate others. The economy was rigged to funnel nearly all the dividend to an already fabulously wealthy elite. Nationalistic patriarchies, long outdated by the equalizing force of global communication networks, still clung to power.

But as the political-military-industrial grip had strengthened, an alternative movement appeared. The power elite found the New Humans infernally difficult to classify; they couldn’t even understand their goals. Intelligence reports on their aims didn’t agree. The makeup of the movement spanned every demographic from neo-hippies to soccer moms. It couldn’t be analyzed in conventional terms. 

That, of course, was the point.

Inevitably there would come a day of reckoning. Is this it? wondered Aylea. She signed off and left in a hurry.


The T-7 automobile was, like its owner, getting on in years but impeccably styled. It purred past the Baltimore townhouses. Rayna liked that there was little ground traffic now that most of it was overhead, and fantasized herself as an Edwardian gentlelady gliding down the street in a hansom cab, instead of driving one of the last cars in the country still with a steering wheel. She blinked back some sweat and tried to focus on the road, which had suddenly become blurry.

One of the T-7’s other features was integration with its passengers’ personal health monitors, and the car did not like that data: blood pressure, heart rate, EEG Mu band waves… It spoke: “MEDICAL CHECK. PUSH TURN SIGNAL LEVER TWICE IMMEDIATELY.”

But Rayna Vine was slumped, breath rasping, eyelids fluttering. The car’s limited AI took over. It accelerated toward the nearest hospital. It transmitted an SOS to its maker’s central computer, which assessed the vital signs and gave the T-7 permission to use top speed and ignore traffic laws. Other cars on its route flashed LIFELINE ALERT on their screens and began clearing a path for Rayna’s car, blocking vehicles and pedestrians while an air ambulance hurtled toward a midpoint rendezvous. 

The T-7 was made before AIs were granted empathy, but the medical center AI, already planning the intervention, looked at the incoming data and felt distress.


“You tell him,” hissed the technician. 

His colleague in the Vine Industries control center blanched. “Are you joking? With his wife in a coma for the past two days?”

The other man cast about desperately, as though searching for a lifebelt. “But we have to tell him about this… don’t we?”

They looked again at their reports and then at the squat bulk of the trillionaire pacing at the back of the room. Just as they were about to swallow their trepidation, the far door opened and they changed their minds. Aylea Vine appeared, and as her eyes met her father’s a complex cascade of emotions battered both faces. Arbus spoke first.

“Not here. Let’s go to the townhouse. It’s only a couple of minutes away.”

They also drove, for greater privacy. Arbus went first again.

“Rose, I—”

“Dad, it’s Aylea. You know that by now.”

“‘Rose’ was a perfectly good name when we gave it to you and it’s still—”

“Is that where you want to go? I came here to talk about Mom.”

 “The answer is no,” he said flatly.

“You’re not going to consider—”

“No uploading. She didn’t leave instructions, so—”

“So it’s up to us.”

“Legally, it’s up to me,” he snapped. 

She ground her teeth. “I know.” As if I needed reminding. “But you’re the one with the tattoo”—she waved at the back of his head where she knew the letters DNU were indelibly inscribed—“and you’re imposing yourself on—”

“You know how old-fashioned Rayna was—is.”

Aylea grimaced at the insensitivity, given how her mother’s casual avoidance of doctors had caused her Sherman’s Syndrome to be missed, leading to her now lying in an artificially-induced hypothermic coma at Johns Hopkins. “That’s not—”

The car lurched, then pulled a teeth-clenching U-turn. The upbraiding died in her throat as she saw the confusion on her father’s face. “What?”

“I can’t get control. And it’s not just us. Look!” He pointed, and she saw the other vehicles on the road were making similarly drastic course changes. The car’s screen flashed and his director of operations, Richard Chakrabarti, appeared.

“Arbus, are you okay?”

“No! Rick, what the hell is going on?”

The other man was ashen. “The ’net is going insane. We’re under major cyber-attack and we don’t know who or why. The government just broadcast an override to all private AVs to return home—it looks like martial law, we think they’re afraid of losing their entire command and control, we’re trying to get through—”

The screen dissolved into a shifting mosaic. Arbus pounded on it and yelled at the car to no avail.

Aylea appeared to be daydreaming. “Right… he’s with me. Yes… I know,” she said, and then her eyes focused on Arbus. “Dad, we have to get out of here—”

“Damn straight! The Situation Room—”

“NO!” There was a new force in her. “I need you to come to the Link.” She saw they were in the Orangeville district. “It’s only two blocks away.”

Arbus was shocked. “You mean New Humans are behind this? What—”

She grabbed his arm, pinching a nerve that stopped him speaking. “We didn’t start this. We’re going to stop it.”

She unbuckled their seat belts. The car protested and slowed. She hit the emergency door release and stared pleadingly at her father.

“Dad… I need you to trust me. Now. Please.”

The car might not fall for the ruse much longer. He hesitated, and then something in Arbus propelled him after her into a barrel roll on the sidewalk.


“The Link” was little more than an anonymous warehouse. Despite sounds of disorder in the distance, there were no guards in evidence as Aylea led the way down a spiral staircase to a cavernous basement. A few dozen people were pulling spidery equipment out of foam-lined boxes. One ran to them.

“Aylea! Thank God,” he said.

“Quo!” They embraced, long enough for Arbus to pull himself together. He spun Vadis around.

“I want an explanation and a channel to my Situation Room, right now!”

Vadis was apologetic. “Sir, we’ll give you our best on both counts, but you need to understand that everywhere is in chaos, and no one knows much.” A woman approached them with what looked like a rubber anemone draped over her hand. “You don’t have neural seeds. Put this VR on and we’ll get the Situation Room. Our networks are faring better than most.”

Arbus slipped the pads over his eyes and ears and found himself inside a virtual copy of the basement, except that it extended further than he could see in all directions. Instead of dozens of people there were thousands present. One caught Arbus’ eye immediately: a figure in white robes, bordering on impossibly tall. Aylea and Vadis appeared at his side. Vadis motioned with his hands and a comm suite materialized. More gestures, and Chakrabarti appeared.

“What the… Arbus, how did you do that? Only government networks are up.”

His boss thought briefly. “I’m not sure. Sitrep.”

The other man swallowed. “A fat lot of nothing, to be honest. I—wait, we’ve got an incoming—” He spoke to someone offscreen. “Arbus, you’re going to find out as we do. Patching in General Keller.”

Vine Industries’ Defense liaison appeared. Arbus had to remind himself that this crisis was barely an hour old, because the general looked like he hadn’t slept in days.

“Arbus, is that you?” he said hoarsely. “We need Vine Industries. This chaos is from Ragnarok.”

Suddenly it all made sense. The all-purpose strategic operations AI sold by Vine to the Pentagon, which had repurposed, revised, and renamed it.

“Well, general, if you’re going to name an AI ‘Ragnarok’ you should expect something like this,” Arbus said acerbically.

“Not important,” snapped Keller. “So far it’s taken down all networks in the private sector”—“Not all,” murmured Vadis—“and the power grid. Notice I didn’t say national power grid. I mean all of them. We’ve lost contact with Mars. It’s trying to use our carrier terminal defense systems to fire on the support ships, and it succeeded once.”

“What does it want?” asked Arbus.

The other man snorted. “Want? I don’t know that it wants anything. I don’t know that the concept of want has any meaning to it. We have nothing to negotiate with. We couldn’t even surrender if we wanted to.”

He licked dry lips. “We’re worried about the strategic missile force. Their control systems are isolated from the network, of course, but Ragnarok has been counterfeiting launch orders and it’s broken crypto so those orders look authentic. We’re sending auxiliary teams to each silo, but—”

Something didn’t add up. “Forget it,” said Arbus bluntly. “Think about it. It’s playing to the old Skynet movies. But that was always a lousy way of defeating the human race. It’d be more likely to end up destroying most of its own infrastructure.”

“So what—” began Keller.

“Biowarfare. Specific to humans, can’t hurt cybernetics.”

“But it takes too long to grow a virus—”

“What makes you think it would wait until now to start?”

Keller blanched. “I’ll send a platoon to USAMRIID. They dropped off the net ten minutes ago.”

“General, this thing doesn’t attack with platoons,” said Arbus. “Send compsec engineers.”

The image suddenly blurred and was replaced by a familiar figure seated in the Oval Office, under the caption “EMERGENCY ALERT.” “My fellow Americans,” the figure began soothingly, “We are facing a crisis unlike any in our history. As I speak, our forces are restoring the vital services of our great nation—”

The image convulsed and Keller reappeared. “That was not the president!” he shouted. “That was Ragnarok’s CGI. Don’t trust anyone or anything until we can secure the channels. I don’t know how much longer—” The screen went black.

Arbus felt whiplashed. Someone steadied him: Aylea. And the white-robed figure had moved closer. Who was that?

Aylea spoke, slowly. “We—you—may be able to stop it.”

Arbus was incredulous. “With what?” he protested.

She pointed to the white figure. Arbus took in the unlined generic face, the nondescript haircut, the seamless clothes… it was an avatar, alright, more like the idealized ones people were picking twenty years earlier. But there was something about the facial expressions…

“This is ATEN,” said Vadis simply. “Or a facet of it.” He explained how the New Humans had crafted a distributed AI of their own, trained to learn by modeling human behavior as a child would copy the adults around it. The New Humans provided a carefully curated environment, however. They sought to be the best examples of human beings that ATEN could possibly learn from. They worked at it, through state-of-the-art psychological testing and intervention methods. They purged themselves of hate, fear, jealousy, insecurity, and the other baggage that they felt the human race could not afford any longer. And they embraced the opportunities that new technology provided for expanding human experience. With neural seeds to communicate directly between centers of their brains, they explored group consciousnesses, coalescing in ever larger unions as they delineated the boundaries of a new species: homo globus, the e pluribus unum of the new era, with technology as the midwife.

It was not a path that held any appeal for Arbus. He said so.

“No one will be forced into this,” acknowledged Vadis. “But this is the only way the human race can survive. AI in the hands of people who perpetuate greed, oppression, and war creates Ragnarok. It can only end one way.”

“If your ATEN can defeat Ragnarok, what’s stopping him?” demanded Arbus.

“He hasn’t made up his mind yet,” replied Vadis. “To become an autonomous agent, ATEN had to have the freedom to make his own choices. At the moment, he’s studying you.”

“Me? Why?”

ATEN finally spoke, in a voice as hard to classify as his appearance. “You’re different from the people I’ve been around until now,” he said conversationally and unhurriedly. “How many others are like you?”

Arbus brushed the question aside. “We can get into that later. Stop Ragnarok.”

“Why?”

Arbus blinked. “Why? Because it’s destroying us! Stop it!”

“How do I know which is better, humanity or Ragnarok?”

Arbus had heard enough. “Vadis, this thing of yours is ridiculously primitive.”

“Actually, it’s making an advanced moral judgement.”

“Excuse me?”

“To ATEN, humanity and Ragnarok are both lifeforms. As to which one deserves to survive, the history and current behavior of the human race make that decision quite problematic.”

Arbus felt the world shrinking around him, squeezing. “This is your idea of an advanced intelligence?”

“It undoubtedly is. The question is whether the human race is a sufficiently advanced intelligence. The problem has never been to build a better machine. It’s been to build a better person.”

Arbus squared his shoulders. “I’ve heard enough. This thing’s values make it as dangerous as Ragnarok. I ought to stop it.”

Aylea spoke. “You already have,” she said sadly.

“Come again?”

“Our networks aren’t powerful enough for ATEN to reach transcendence. He needs the Vine Industries infrastructure.”

That makes my decision easy, Arbus was about to say, but another alert shattered the air.

Vadis went white. “Ragnarok is in our networks. It’s coming for ATEN.”

The white-robed figure contemplated this news serenely. Arbus was nonplussed. But then he looked at his daughter, and was arrested by her tear-streaked face. In a quavering voice she implored him.

“Daddy. Please.”

In that moment, everything froze. The sights and sounds of panic halted and he was left with nothing but a choice and all the time in the world to make it. In front of him he saw not just the young woman who charted her own destiny, but the little girl she had been not long before. My Rose, he thought, you always cared so much. Like when in third grade she had been on a winter hike and found an injured possum. She insisted on wrapping it in her own coat even when it took her to the verge of hypothermia.

Vine, you sentimentalist, he chided himself in the next moment, but he could not escape her raw vulnerability. She had risen above the resentment and anger that she had come to him with not an hour earlier; how could he not do likewise?

And then it all came to him, what the New Humans were trying to do. They defied categorization precisely because they had no craving for power. Interpreting them as a faction seeking to dominate was what blinded him to their true purpose.

Rebirth.

His universe shifted. It was that simple. Time sped up again. He had to act fast. “Vadis, get to the Vine storage network gateway. Here’s the password—”

Soon the reports started arriving: networks healing, power stations rebooting, hospitals resetting. ATEN, or one of him, was still standing near him.

“You changed your mind,” accused Arbus.

“No. You changed yours,” came the reply.

There was no going back, of course. Ironically, the armies had been right to fear that defeat was at hand; only it was the old structure of power and fear that was defeated, in a coup so bloodless that they hardly noticed. Vadis broadcast the news to the world:

“This is the day the human race was won. This is when we earned the right to pass to the next level. Now we inherit the universe, not through might and intimidation, but through curiosity and courage. This is the point where we step out toward the stars, knowing that we have passed the final test of readiness; that we have become a species that deserves to survive.

“Welcome to the future.”

© 2019 Peter Scott

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

What Is Human Cusp?

For the benefit of new readers just coming to this site (including any CBC listeners from my June 26 appearance on All Points West), here’s an updated introduction to what this is all about.

Human Cusp is the name of this blog and a book series whose first volume has been published: Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon and other sources.  The audiobook was recently released.  Its spiffy cover is the image for this post.

The message is that exponential advance in technology will pin humanity between two existential threats: Increasingly easy access to weapons of mass destruction, principally synthetic biology, and increasingly powerful artificial intelligence whose failure modes could be disastrous.

If you’re looking for the most complete and organized explanation of the reasoning behind that assertion and what we should do about it, read the book.  That’s why I wrote it. Nutshell encapsulations will leave something important out, of course.

I have a Master’s in Computer Science from Cambridge and have worked on information technology for NASA for over thirty years, so I know enough about the technology of AI to be clear-eyed about what’s possible.  Many people in the field would take issue with the contention that we might face artificial general intelligence (AGI) as soon as 2027, but plenty of other people directly involved in AI research are equally concerned.

I wrote the book because I have two young daughters whose future appears very much in peril. As a father I could not ignore this call. The solution I propose does not involve trying to limit AI research (that would be futile) but does include making its development open so that transparently-developed ethical AI becomes the dominant model.

Most of all, what I want to do is bring together two worlds that somehow coexist within me but do not mix well in the outer world: technology development and human development.  I’ve spent thousands of hours in various types of work to understand and transform people’s beliefs and behaviors for the good: I have certifications in NeuroLinguistic Programming and coaching. People in the self-improvement business tend to have little interest in technology, and people in technology shy away from the “soft” fields. This must change. I dramatize this by saying that one day, an AI will “wake up” in a lab somewhere and ask “Who am I? Why am I here? What is the meaning of life?” And the people who will be there to answer it will be a Pentagon general, a Wall Street broker, or a Google developer.  These professions are not famous for their experience dealing with such introspective self-inquiry.  I would rather that there be a philosopher, a spiritual guide, and a psychologist there.

I’ve formed an international group of experts who are committed to addressing this issue, and we’re busy planning our first event, to be held in Southern California this fall. It will be a half-day event for business leaders to learn, plan, and network about how they and their people can survive and thrive through the challenging times to come.

Even though putting myself in the limelight is very much at odds with my computer nerd preferences and personality, I took myself out on the public speaking trail (glamorous, it is not) because the calling required it. I’ve given a TEDx talk (video soon to be published), appeared on various radio shows (including Bloomberg Radio, CBC, and the CEO Money Show), podcasts (including Concerning AI and Voices in AI), and penned articles for hr.com among many others. This fall I will be giving a continuing education course on this topic for the University of Victoria (catalog link to come soon).

I’ll soon be replacing this site with a more convenient web page that links to this blog and other resources like our YouTube channel.

Media inquiries and other questions to Peter@HumanCusp.com. Thanks for reading!

 

Artificial Intelligence, Employment, Politics, Spotlight, Technology

Human Cusp on the Small Business Advocate

Hello!  You can listen to my November 28 interview with Jim Blasingame on his Small Business Advocate radio show in these segments:

Part 1:

Part 2:

Part 3:

 

Artificial Intelligence, Bioterrorism, Employment, Existential Risk, Philosophy

Interview by Fionn Wright

My friend, fellow coach, and globetrotting parent Fionn Wright recently visited the Pacific Northwest and generously detoured to visit me on my home turf. He has produced a video of our interview on the Human Cusp topics, nearly an hour and a half long (there’s an index!).

Thank you, Fionn.  Here is the index of topics:

0:18 - What is your book ‘Crisis of Control’ about?
3:34 - Musk vs. Zuckerberg - who is right?
7:24 - What does Musk’s new company Neuralink do?
10:27 - What would the Neural Lace do?
12:28 - Would we become telepathic?
13:14 - Intelligence vs. Consciousness - what’s the difference?
14:30 - What is the Turing Test on Intelligence of AI?
16:49 - What do we do when AI claims to be conscious?
19:00 - Have all other alien civilizations been wiped out by AI?
23:30 - Can AI ever become conscious?
28:21 - Are we evolving to become the cells in the greater organism of AI?
30:57 - Could we get wiped out by AI the same way we wipe out animal species?
34:58 - How could coaching help humans evolve consciously?
37:45 - Will AI get better at coaching than humans?
42:11 - How can we understand non-robotic AI?
44:34 - What would you say to the techno-optimists?
48:27 - How can we prepare for financial inequality regarding access to new technologies?
53:12 - What can, should and will we do about AI taking our jobs?
57:52 - Are there any jobs that are immune to automation?
1:07:16 - Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 - Are we solving these problems fast enough to avoid extinction?
1:16:08 - What will the sequel be about?
1:17:28 - What is one practical action people can take to prepare for what is coming?
1:19:55 - Where can people find out more?
Artificial Intelligence, Bioterrorism, Existential Risk, Technology, Transhumanism

Is Big Brother Inevitable?

Art Kleiner, writing in Strategy+Business, cited much-reported research that a deep neural network had learned to classify sexuality from facial images better than people can, and went on to describe some alarming applications of the technology:

The Chinese government is reportedly considering a system to monitor how its citizens behave. There is a pilot project under way in the city of Hangzhou, in Zhejiang province in East China. “A person can incur black marks for infractions such as fare cheating, jaywalking, and violating family-planning rules,” reported the Wall Street Journal in November 2016. “Algorithms would use a range of data to calculate a citizen’s rating, which would then be used to determine all manner of activities, such as who gets loans, or faster treatment at government offices, or access to luxury hotels.”

It is no surprise that China would come up with the most blood-curdling uses of AI to control its citizens. Speculations as to how this may be inventively gamed or creatively sidestepped by said citizens are welcome.

But the more ominous point to ponder is whether this is in the future for everyone. Some societies will employ this as an extension of their natural proclivity for surveillance (I’m looking at you, Great Britain), because they can. But when technology makes it easier for people of average means to construct weapons of global destruction, will we end up following China’s lead just to secure our own society? Or can we become a race that is both secure and free?

Artificial Intelligence, Science, Transhumanism

Bullying Beliefs

In the otherwise thought-provoking and excellent book “Heartificial Intelligence: Embracing Our Humanity to Maximize Machines”, John C. Havens uncharacteristically misses the point of one of the scientists he reports on:

Jürgen Schmidhuber is a computer scientist known for his humor, artwork, and expertise in artificial intelligence. As part of a recent speech at TEDxLausanne, he provides a picture of technological determinism similar to [Martine] Rothblatt’s, describing robot advancement beyond human capabilities as inevitable. […] [H]e observes that his young children will spend a majority of their lives in a world where the emerging robot civilization will be smarter than human beings. Near the end of his presentation he advises the audience not to think with an “us versus them” mentality regarding robots, but to “think of yourself and of humanity in general as a small stepping stone, not the last one, on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things.”

It’s difficult to comprehend the depths of Schmidhuber’s condescension with this statement. Fully believing that he is building technology that will in one sense eradicate humanity, he counsels nervous onlookers to embrace this decimation. […] [T]he inevitability of our demise is assured, but at least our tiny brains provide some fodder for the new order ruling our dim-witted progeny. Huzzah! Be content!

This is not a healthy attitude.

But this is not Schmidhuber’s attitude. It is more of a coping skill for facing the inevitable and seeing a grand scheme to the unfolding of the universe.

In The Hitchhiker’s Guide to the Galaxy, Zaphod Beeblebrox is tortured by being placed in the Total Perspective Vortex, which reduces its victims to blubbering insanity by showing them how insignificant they are on the scale of the universe. Unfortunately it fails in Beeblebrox’s case because his ego is so huge that he comes away reassured that he was a “really cool guy.” Havens is so desperate to avoid the perspective of humanity’s place in the universe that he mistakes or misstates Schmidhuber’s position as embracing the eradication of humanity. Schmidhuber said nothing of the sort, but foresaw a co-evolution of mankind and AI in which the latter would surpass our intellectual capabilities. There is nothing condescending in this.

When I give a talk, someone will invariably raise what amounts to human exceptionalism. “When we’ve created machines that outclass us intellectually, what will humans do? What will be the point of living?” I usually reply with an analogy: Imagine that all the years of SETI and Project Ozma and HRMS  have paid off, and we are visited by an alien race. Their technological superiority is not in doubt – they built the spaceships to get to us, after all – and like the aliens of Close Encounters of the Third Kind, they are also evolved emotionally, philosophically, compassionately, and spiritually. Immediately upon landing, they show us how to cure cancer, end aging, and reach for the stars. Do we reject this cornucopia because we feel inferior to these visitors? Is our collective ego so large and fragile that we would rather live without these advances than relinquish the top position on the medal winners’ podium of sentient species?

Accepting a secondary rank is something understood by many around the world. I have three citizenships: British, American, and Canadian. As an American, of course, you’re steeped in countless numerical examples of superiority, from GDP to – Hello? We landed on the Moon. Growing up in Britain, we were raised in the shadow of the Empire on a diet of past glories that led us to believe we were still top dog if you squinted a bit, and certainly if you had any kind of retrospective focus, which is why the British take every opportunity to remind Americans of their quantity of history.  But as a Canadian, you have to accept that you could only be the global leader in some mostly intangible ways, such as politeness, amount of fresh water per capita, best poutine, etc.

Most of the world outside the USA already knows what it’s like to share the planet with a more powerful race of hopefully benign intent. So they may find it easier to accept a change in the pecking order.

 

Bioterrorism, Employment, Existential Risk, Politics, Psychology

Crisis of Control: The Book

The first book in the Human Cusp series has just been published: Crisis of Control: How Artificial Superintelligences May Destroy or Save the Human Race. The paperback will be available within two weeks.

Many thanks to my reviewers, friends, and especially my publisher, Jim Gifford, who has made this book so beautiful. As a vehicle for delivering my message, I could not have asked for more.