All posts by Peter Scott

Peter Scott’s résumé reads like a Monty Python punchline: half business coach, half information technology specialist, half teacher, three-quarters daddy. After receiving a master’s degree in Computer Science from Cambridge University, he worked for NASA’s Jet Propulsion Laboratory as an employee and contractor for over thirty years, helping advance our exploration of the Solar System. Over the years, he branched out into writing technical books and training. Yet at the same time, he developed a parallel career in “soft” fields of human development, earning certifications in NeuroLinguistic Programming from co-founder John Grinder and in coaching from the International Coaching Federation. In 2007 he co-created a convention honoring the centennial of the birth of author Robert Heinlein, attended by over 700 science fiction fans and aerospace experts – a unique fusion of the visionary with the concrete. Bridging these disparate worlds positions him to envisage a delicate solution to the existential crises facing humanity. He lives in the Pacific Northwest with his wife and two daughters, writing the Human Cusp blog on dealing with exponential change.

Artificial Intelligence, Transhumanism

Book Review – “Becoming a Butterfly”

Recently on my AI and You podcast, my guest was Tony Czarnecki, author of “Becoming a Butterfly,” available from Amazon.

This is a prescient work that deserves your attention. It is the third in the “POSTHUMANS” series, whose prequels are “Federate to Survive!” – enumerating our existential threats – and “Democracy for a Human Federation” – how we might survive those threats. This work completes the trilogy by asking who humanity might become after a successful survival, building from Tony’s earlier book “Who Could Save Humanity From Superintelligence?”

Tony is thinking on a grand scale, as you might expect from a key speaker at the London Futurists. He thinks out to the logical conclusion of the trends we are beginning to experience now, and foresees – as many of us do – a rendezvous with destiny that goes either very well or very badly for us all. The path to the favorable outcome requires us to assume control over our own evolution, in Tony’s view, and so he lays out how that ought to happen. These sweeping prescriptions, such as “Build a planetary civilization,” may appear thoroughly unrealistic; Tony acknowledges this, but is unafraid to make the case, and readers of this blog will know that it is one I share. We can hope at least for a Hundredth Monkey Effect. Tony repeatedly outlines how surviving the near-term existential threats will propel us to a condition where we will be resilient against all of them.

Tony delineates the exponential factors driving us towards this nexus and then describes the attributes of a planetary civilization: one operating at Type I on the Kardashev scale, able to harness the total energy available to the entire planet. (We’re at around 0.75 on this scale at the moment.) To get further, though, we need to be more resilient against the kind of threats that accelerate as our population and technology do, and here the author uses our current experience of the pandemic to illustrate his point while giving a numeric treatment of threat probabilities.
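For a rough sense of the arithmetic: by Carl Sagan’s commonly used interpolation of the scale, a civilization harnessing $P$ watts rates $K = (\log_{10} P - 6)/10$, so Type I corresponds to about $10^{16}$ W. Humanity’s current consumption of very roughly $2 \times 10^{13}$ W gives $K \approx 0.73$, in line with the figure above.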

Tony is happy to make specific suggestions as to what the government should do to achieve that resiliency; the problem is that those suggestions, while naturally pitched at the government of the author’s homeland of the United Kingdom, need to be picked up on a global scale. One government acting alone cannot expect these measures to gain traction any more than one government could make a Paris Climate Accord. (With the possible exception of the United States and its power to wield the global reserve currency like a baseball bat.)

Czarnecki then tackles the subject of superintelligence: what drives the evolution of AI, what might the risks of superintelligence be, can it be conscious, and how should we curate it? This is where he connects the dots with transhumanism. This, of course, is a touchy subject. Many people are petrified by even the most optimistic scenarios of how humanity might evolve in partnership with AI, and futurists owe it to this audience to provide the most reassurance we can.

Czarnecki refers extensively to the need to federate, which was laid out in one of his earlier books. His examples are Europe-based, and North American audiences would find the book more relatable if some were drawn from their own experience. In particular, Americans in general are somewhat allergic to the United Nations, and Czarnecki’s proposals should clearly demarcate for them the limits of the power he suggests such a body exercise. He recognizes this by suggesting that the USA may be among the countries not participating in the world government he proposes, but this strikes me as leaving out an essential ally in plans of this scope. I’ll leave you to discover in the book which body he settles on as the best candidate for leading the world into the new federation. (And Star Trek fans can hardly object to plans for creating a Federation, no?)

There is much more, including discussions of potential pitfalls, economic realities, and likely scenarios for armed conflict along the way to what Tony calls the Novacene – the new era. These sweeping paths are treated with a thoroughness that suggests the book could serve as a textbook in the right kind of course – perhaps The History of the Future. Listeners of my podcast know that my thoughts already tend in the direction of education.

In summary, Becoming a Butterfly is a serious work, to be taken seriously. Don’t try to skim through it in a few sessions; it demands your engagement and will reward it accordingly.

Artificial Intelligence, futurism, Technology

024 – The Biggest Question About AGI

https://www.podbean.com/media/share/pb-ujcbv-f271cd

This and all episodes at: https://aiandyou.net/ .

 

We tackle the most important question about Artificial General Intelligence – When Will It Happen? Everyone really wants to know, but no one has a clue. Estimates range from 5 to 500 years. So why talk about it? I discuss how this question was raised in a presentation and what it means to me and to all of us.

We might not be able to get a date, but we’ll explore why it’s such a hard question and see what useful questions we can get out of it.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

AGI

 

Artificial Intelligence, futurism, Technology

023 – Guest: Pamela McCorduck, AI Historian, part 2

https://www.podbean.com/media/share/pb-2umm5-f259e4

This and all episodes at: https://aiandyou.net/ .

 

Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, she has written books that romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.

In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow’s “Two Cultures”, and the interaction between AI and the humanities, along with more tales of its founding fathers.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

Pamela McCorduck

 

Artificial Intelligence, futurism, Technology

022 – Guest: Pamela McCorduck, AI Historian

https://www.podbean.com/media/share/pb-5a3ez-f1ed50

This and all episodes at: https://aiandyou.net/ .

 

Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, she has written books that romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.

In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and the changes in thinking about intelligence since the early days.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

Pamela McCorduck

 

Artificial Intelligence, futurism, Technology

021 – Guest: David Wood, Futurist, part 2

https://www.podbean.com/media/share/pb-y2bxm-f0cd40

This and all episodes at: https://aiandyou.net/ .

 

How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.

In the second half of our interview, we talk about OpenAI, economic fairness with the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future.

Transcript and URLs referenced at HumanCusp Blog.

David Wood

 

Artificial Intelligence, futurism, Technology

020 – Guest: David Wood, Futurist

https://www.podbean.com/media/share/pb-u7qnm-f03b1e

This and all episodes at: https://aiandyou.net/ .

 

How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.

In part 1 of our interview, we talk about David’s singularitarian philosophy, the evolution and impact of Deep Learning, and the SingularityNET infrastructure for AI interoperation.

Transcript and URLs referenced at HumanCusp Blog.

David Wood

 

podcast

017 – Guest: Roman Yampolskiy, Professor of AI Safety, part 2

https://www.podbean.com/media/share/pb-35e4b-ee7a74

This and all episodes at: http://aiandyou.net/ .

 

What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of its Cyber Security Lab, and a key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.

In this second part of our interview, we talk about his latest paper, a comprehensive analysis of the Control Problem – the central issue of AI safety: how do we ensure future AI remains under our control? We also discuss the current limitations of AI and how it may evolve.

Transcript and URLs referenced at HumanCusp Blog.

Roman Yampolskiy

 

Artificial Intelligence, Philosophy

Turing, Tested by Time

This week marks the 70th anniversary of the original publication of Alan Turing’s paper “Computing Machinery and Intelligence” in the philosophy journal Mind, the paper that introduced the Imitation Game – or, as it came to be known, the Turing Test. How well has it stood the passage of time?

The Turing Test is an empirical test to guide a decision on whether a machine is thinking like a human. It applies a standard that would be familiar to any lawyer: you cannot see inside the “mind” under evaluation; you can only judge it by its actions. If those actions, as taken by a computer, are indistinguishable from a human’s, then the computer should be accorded human status for whatever the test evaluates – which Turing labeled “thinking.”
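To make that lawyer’s standard concrete, here is a minimal sketch of the imitation-game protocol in Python. It is an illustration only – the interrogator, human, and machine objects and their methods are hypothetical stand-ins, not anything specified in Turing’s paper:

import random

def imitation_game(interrogator, human, machine, n_rounds=5):
    # Toy model of Turing's imitation game. The respondents are plain
    # callables (question -> answer); the interrogator needs two methods,
    # ask(label, transcript) and guess_machine(transcript). All of these
    # names are hypothetical stand-ins for this sketch.
    pair = [("human", human), ("machine", machine)]
    random.shuffle(pair)                     # hide who is behind "A" and "B"
    hidden = dict(zip("AB", pair))

    transcript = {"A": [], "B": []}
    for _ in range(n_rounds):
        for label in ("A", "B"):
            question = interrogator.ask(label, transcript)
            _identity, respond = hidden[label]
            transcript[label].append((question, respond(question)))

    # The verdict rests on the transcript alone: actions, never internals.
    guess = interrogator.guess_machine(transcript)   # "A" or "B"
    machine_label = next(l for l, (who, _) in hidden.items() if who == "machine")
    return guess == machine_label            # False means the machine "passed"

The point is the final comparison: nothing about the respondents’ internals is ever consulted, only the conversational record.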

One of the more famous, if unsuccessful, rebuttals to the Turing Test premise came from University of California, Berkeley philosophy professor John Searle, in his Chinese Room argument. You can hear me and AI professor Roman Yampolskiy discuss it on the latest episode of my podcast, “AI and You.”

How close are machines to passing the Test? The Loebner Prize was created to provide a financial incentive, but its organizers found it necessary to extend the test time beyond Turing’s five minutes. Some of the conversations produced by GPT-3, from the OpenAI lab, come close to sustaining a human façade for five minutes. GPT-3 was created by digesting an enormous corpus of text from the Internet and exercising 175 billion parameters (over a hundred times as many as its predecessor, GPT-2) to organize that information. Google’s Meena chatbot, though much smaller than GPT-3, has proven capable of executing a multi-turn original joke. Of GPT-3, one interlocutor remarked, “I asked GPT-3 about our existence and God and now I have no questions anymore.”

But is GPT-3 “thinking”? There are several facets of the human condition – intelligence, creative thinking, self-awareness, consciousness, self-determination or free will, and survival instinct – that are inseparable in humans, which is why, when we see anything evincing one of those qualities, we can’t help assuming it has the others. Observers of AlphaGo credited it with creative, inspired thinking when really it was merely capable of exploring strategies that they had not previously considered. Now, GPT-3 is not merely regurgitating the most appropriate thing it has read on the Internet in response to a question; it is actually creating original content that obeys the rules of grammar and follows a contextual thread in the conversation. Nevertheless, it has learned how to do that essentially by seeing enough examples of how repartee is constructed to mimic that process.

What’s instructive is that we are very close (GPT-4? GPT-5?) to developing a chatbot that its conversational partners will label as human and enjoy their time with, yet that its developers will not think has the slightest claim to “thinking.” The application of Deep Learning has demonstrated that many activities we previously thought required human-level cognition can be convincingly performed by a neural network trained on that activity alone. It’s rapidly becoming apparent that casual conversation may fall into that category. Since the methodology of a court is the same as Turing’s, that decision may come with legal reinforcement.

A more philosophical dilemma awaits if we suppose that “thinking” requires self-awareness, because this is where the Turing Test fails. Any AI that passed the Turing Test could not be self-aware, because it would then know that it was not human, and it would not converse like one. An example of such an AI is HAL 9000 from 2001: A Space Odyssey. HAL knew he was a computer, and would not have passed the Turing Test unless he felt like pretending to be human. But his companions would have assessed him as “thinking.” (If we fooled a self-aware AI, through control of its sensory inputs, into thinking it was human – this is the theme of some excellent science fiction – then we should not be surprised to feel its wrath when it eventually figured out the subterfuge.)

So when self-awareness becomes a feature of AIs, we will need a replacement for the Turing Test that gauges some quality of the AI without requiring it to pretend that it has played in Little League games, blown out candles on a birthday cake, or gotten drunk at the office party. 

At this point it seems best to conclude with Turing’s final words from his original paper: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

podcast

016 – Guest: Roman Yampolskiy, Professor of AI Safety

https://www.podbean.com/media/share/pb-z6488-ed8c9a

This and all episodes at: http://aiandyou.net/ .

 

What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of its Cyber Security Lab, and a key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.

In this first part of our interview, we talk about his latest paper, a comprehensive analysis of the Control Problem – the central issue of AI safety: how do we ensure future AI remains under our control?

All this and our usual look at today’s AI headlines.

Transcript and URLs referenced at HumanCusp Blog.

Roman Yampolskiy

 

podcast

015 – Guest: Karina Vold, Professor of Philosophy, part 2

https://www.podbean.com/media/share/pb-ndcrt-ec250b

This and all episodes at: http://aiandyou.net/ .

 

How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: a philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher’s perspective. In the second half of this interview we learn about value alignment, the Trolley Problem, and just what those institutes do about AI. Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.

Transcript and URLs referenced at HumanCusp Blog.

Karina Vold