Peter Scott’s résumé reads like a Monty Python punchline: half business coach, half information technology specialist, half teacher, three-quarters daddy. After receiving a master’s degree in Computer Science from Cambridge University, he went on to work for NASA’s Jet Propulsion Laboratory as an employee and contractor for over thirty years, helping advance our exploration of the Solar System. Over the years, he branched out into writing technical books and training.
Yet at the same time, he developed a parallel career in “soft” fields of human development, earning certifications in Neuro-Linguistic Programming from co-founder John Grinder and in coaching from the International Coaching Federation. In 2007 he co-created a convention honoring the centennial of the birth of author Robert Heinlein, attended by over 700 science fiction fans and aerospace experts, a unique fusion of the visionary with the concrete. Bridging these disparate worlds positions him to envisage a delicate solution to the existential crises facing humanity. He lives in the Pacific Northwest with his wife and two daughters, writing the Human Cusp blog on dealing with exponential change.
It’s a pleasure to review Katie King’s new book, AI Strategy for Sales and Marketing. (Katie and I have worked together, but not on this book.) She has grown incredibly as an author since her previous book, Using Artificial Intelligence in Marketing: there’s more content, more depth, more examples, more case studies, more graphics and even better typography.
Katie breaks down AI along every conceivable axis that would interest anyone in sales or marketing: the benefits, risks, issues, ethics, even the international political balance of power. She gives step-by-step advice on how to use AI, detailing the different ways it can be used, including technologies such as Emotion AI. There’s not just one bibliography for the book, but one for each chapter. She covers industries from telecoms to automotive and business units from HR to marketing.
I can’t imagine anyone in marketing not wanting this book on their shelf. If you’re not up on AI today you’ll be out of a job tomorrow, and this is the way to stay ahead of the game.
In the summer of 1956, several scientists and mathematicians gathered for an extended, loosely structured conference at Dartmouth College in Hanover, New Hampshire, to define a bold, expansive new term: artificial intelligence. Coined by John McCarthy in the proposal for that workshop the previous year, the words are now part of the everyday fabric of our language. As different wunderkinds rotated through this Woodstock of cybernetics, they defined an ambitious agenda:
“…that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
They modestly estimated that these goals could be rapidly met:
“We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
Okay, it’s easy to mock. But that’s not why I’m here. In several books, web pages, and the 2020 Netflix documentary Coded Bias, the following picture is used to identify the movers and shakers from that turning point in history:
Except that the man at bottom right is not Trenchard More. Take another look: this man is not even dressed in clothing from the same century as the others. Who is he? Hold that thought.
Let’s look at a famous photograph from that summer of computer love:
Left to right: Oliver Selfridge, Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, Trenchard More, John McCarthy, Claude Shannon
This photograph of early genius nerds cavorting on the grass is legendary, although it is seldom fully captioned. After some research, I put a name to every face. There, third from right, is the real Trenchard More.
Who, then, is the staid gentleman so obviously out of place in the mug-shot line-up?
To get the answer, it is only necessary to run a Google image search on the name “Trenchard More,” and up pops this picture:
The signature (and caption) is of one Louis Trenchard More, instructor in physics at Johns Hopkins University… died 16 January 1944, 12 years before the picture on the lawn. The false Trenchard More picture may not even have been taken in the 1900s. Its place in the line-up grid is the result of lazy Googling. For a long time, pictures of the actual Trenchard More were much harder to find than any of Louis T-M. Recently, the results have improved.
Here is the picture used in Trenchard More’s obituary following his death on October 4, 2019 at the splendid age of 89:
How better to end than by quoting that obituary:
Trenchard was born in Boston and raised in Hingham, where his family had a longstanding connection to Old Ship Church. He received his bachelor’s degree in mathematics from Harvard and a doctorate in electrical engineering from M.I.T. before teaching at Yale University in the 1960s. He was an early contributor to the field of artificial intelligence and developed Array Theory, the mathematics of nested interactive arrays. His work took him to IBM, and internationally to Queen’s University in Canada and the Technical University of Denmark. Trenchard married Katharine Grinnell Biddle of Milton in 1958. They took their family up Mount Katahdin, spent summers on the Elizabeth Islands, and went skiing in the Green Mountains, in addition to cultural forays to New York City. When grandchildren arrived, Trenchard gathered his family for many summers on the coast of Maine. His love of natural beauty was echoed in his appreciation for the music of Stravinsky. Trenchard is survived by his beloved wife Kate, son Paul More and his wife Elizabeth of Concord, son Grinnell More and his wife Linda of Jacksonville, daughter Libby Pratt and her husband Neil of London, and six grandchildren.
Like moths to a flame, we cannot resist the siren call of the Control Problem, the name AI philosophers give to the question of how or whether we will be able to control AI as it gets more and more intelligent. (Not for nothing did I name my book Crisis of Control.) A reporter contacted me to ask for comment on a newly published paper from the Max Planck Institute. The paper makes a mathematical case for the impossibility of controlling a superintelligence through an extension of the famous (in computer science, at least) Halting Problem. This is a proof given to first-year computer science students to blow their minds about how to think about programs as data, and it works by establishing a contradiction: Suppose a function exists that can tell whether a program (whose source code is passed as input to the function) is going to halt. Now create a program that calls that function with itself as input, and if the function returns true, loops forever, i.e., does not halt; otherwise, it halts immediately. This program halts if the function says it doesn’t, and doesn’t halt if the function says it does. Smoke comes out of the computer: Paradox Alert! Therefore, no such function can exist.
You might intuit that a lot of hay can be made from that line of reasoning, and this is exactly the road that MPI went down. Their paper says, in effect, “Suppose we had a function that could tell whether a program (an AI) would harm humans. Now imagine a program that calls that function on itself and, if the result is false, causes harm to humans.” Boom: Paradox Alert. Therefore it is impossible in general to tell whether a given program will cause harm to humans.
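If you like to see such arguments written down, here is a minimal sketch of that self-referential trick in Python. The names halts, paradox, and harms_humans are my own illustrative labels, not anything defined in the MPI paper; the whole point of the proof is that these oracle functions can never actually be implemented.

```python
# A sketch, not a working algorithm: the proof shows these oracles cannot exist.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) would eventually halt."""
    raise NotImplementedError("Turing proved no general implementation exists")

def paradox(program):
    """The self-referential program from the classic proof."""
    if halts(program, program):
        while True:   # the oracle said we halt, so loop forever
            pass
    else:
        return        # the oracle said we loop forever, so halt immediately

# Feed paradox to itself: paradox(paradox) halts exactly when
# halts(paradox, paradox) says it doesn't. Contradiction, so halts() is impossible.
#
# The Max Planck Institute argument makes the same move with a hypothetical
# harms_humans(program) oracle: a program that consults the oracle on itself
# and harms humans exactly when the oracle says it won't.
```

Nothing in the sketch depends on how clever the oracle is; the contradiction comes purely from self-reference, which is why the argument applies to any would-be containment check.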
That’s a narrow conclusion to hang a large amount of philosophical weight on, but the paper’s authors don’t mind going there. They invoke the AI boxing problem and the story of the Monkey’s Paw to make sure we are aware of the consequences of not being able to guarantee control of AI. This commentary was picked up by the reporter who contacted me. You can see the resulting article in Lifewire.
I supplied more input than they had room to quote, of course. I was quoted fairly, and not taken out of context, but for you, here’s the full text of what I gave the reporter:
The paper you cite extends a venerable computer science proof to the theoretical conclusion that it is impossible to prove that a sufficiently advanced computer program couldn’t harm humanity. That doesn’t mean they’ve proved that advanced AIs will harm humanity! Just that we have no assurances that they won’t. Much as when voting for a candidate for President, we have no guarantee that he won’t foment an insurrection at the end of his term.
Controlling AI is currently a quality assurance problem: AI is a branch of computer science and its products are software; unpredictability in its behavior is what we call bugs. There is a long-established discipline for testing software to find them. As AI becomes increasingly complex, its operational modes become so varied as to defy comprehensive testing. Will a self-driving vehicle start learning about the psychology of children because it needs to predict whether they will jump in front of the car? Will we need to teach that car the physics of flammable liquids so it can decide whether it can convey its passengers to safety if it encounters an overturned tanker in the road? The range of possible behavior approaches infinity. We cannot expect to keep testing manageable by limiting the possible knowledge and behavior of an AI because it is precisely that unbounded knowledge that will allow it to do what we want.
We cannot, ultimately, ensure the controllability of AI any more than we can ensure that of our children. We raise them right and hope for the best; so far they have not destroyed the world. To raise them well we need a better understanding of ethics; if we can’t clean our own house, what code are we supposed to ask AI to follow?
The problems in controlling AI right now are those of managing any other complex software; if we have a bug, what will it do, send grandma an electric bill for a million dollars? Or will image tagging software decide that African Americans are gorillas, as Google Photos did? These bugs are not the kind of uncontrollability that the paper’s authors or your readers are interested in. They want to know if and when AI will develop agency, or purposes that are clearly at odds with what its creators intended. That is known as the value alignment problem in artificial intelligence, and it does not require the AI become self-aware to be a problem. Nick Bostrom’s hypothetical paper-clip maximizer AI does not have to be conscious to wipe out the human race. But it does require a level of sophistication we do not currently know how to create. Most experts think we are decades away from that ability; your readers need not panic. We do, however, think that it is worth addressing the issue now, because the solution may also take decades to prepare.
What do you think about whether we could or should control superintelligent AI? Comment on this thread or use our contact form and maybe I can answer your question on my podcast.
“Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do.” (Alan Turing, “Computing Machinery and Intelligence,” 1950)
This is a prescient work that deserves your attention. It is the third in the “POSTHUMANS” series, whose prequels are “Federate to Survive!” – enumerating our existential threats – and “Democracy for a Human Federation” – how we might survive those threats. This work completes the trilogy by asking who humanity might become after a successful survival, building on Tony’s earlier book “Who Could Save Humanity From Superintelligence?”
Tony is thinking on a grand scale, as you might expect from a key speaker at the London Futurists. He thinks out to the logical conclusion of the trends we are beginning to experience now, and foresees – as many of us do – a rendezvous with destiny that goes either very well or very badly for us all. The path to the favorable outcome requires us to assume control over our own evolution, in Tony’s view, and so he lays out how that ought to happen. These sweeping prescriptions, such as “Build a planetary civilization,” may appear thoroughly unrealistic; Tony acknowledges this, but is unafraid to make the case, and readers of this blog will know that it is one I share. We can hope at least for a Hundredth Monkey Effect. Tony repeatedly outlines how surviving the near-term existential threats will propel us to a condition where we will be resilient against all of them.
Tony delineates the exponential factors driving us towards this nexus and then describes the attributes of a planetary civilization: operating at level 1 on the Kardashev scale, able to harness the total energy available to the entire planet. (We’re at around 0.75 on this scale at the moment.) To get further, though, we need to be more resilient against the kind of threats that accelerate as our population and technology do, and here the author uses current experience in the pandemic to illustrate his point while giving a numeric treatment of threat probabilities.
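For readers who want to check that 0.75 figure, Carl Sagan’s interpolation formula for the Kardashev scale makes it a one-line calculation. The sketch below is mine, not Tony’s; the 2 × 10¹³ watts used for humanity’s current power consumption is a commonly cited round figure, so treat the output as an order-of-magnitude illustration.

```python
from math import log10

def kardashev(power_watts: float) -> float:
    """Carl Sagan's continuous interpolation of the Kardashev scale.

    K = (log10(P) - 6) / 10, where P is the total power harnessed in watts.
    Type I (a planetary civilization) corresponds to about 10^16 W, i.e. K = 1.
    """
    return (log10(power_watts) - 6) / 10

print(kardashev(2e13))  # ~0.73 -- humanity today, on a commonly cited estimate
print(kardashev(1e16))  # 1.0  -- a full Type I, planetary civilization
```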
Tony is happy to make specific suggestions as to what the government should do to achieve that resiliency; the problem is that those suggestions, while naturally pitched at the government of the author’s homeland of the United Kingdom, need to be picked up on a global scale. One government acting alone cannot expect these measures to gain traction any more than one government could make a Paris Climate Accord. (With the possible exception of the United States and its power to wield the global reserve currency like a baseball bat.)
Czarnecki then tackles the subject of superintelligence: What drives the evolution of AI, what might the risks of superintelligence be, can it be conscious, and how should we curate it? This is where he connects the dots with transhumanism. This, of course, is a touchy subject. Many people are petrified at even the most optimistic scenarios of how humanity might evolve in partnership with AI, and futurists owe it to this audience to provide the most reassurance we can.
Czarnecki refers extensively to the need to federate, which was laid out in one of his earlier books. His examples are Europe-based, and North American audiences would find the book more relatable if some were drawn from their own experience. In particular, Americans in general are somewhat allergic to the United Nations, and Czarnecki’s proposals should clearly demarcate for them the limits of the power he suggests it should exercise. He recognizes this by suggesting that the USA may be among the countries not participating in the world government he proposes, but this strikes me as leaving out an essential ally in plans of this scope. I’ll leave you to discover in the book which body he settles on identifying as the best candidate for leading the world into the new federation. (And Star Trek fans can hardly object to plans for creating a Federation, no?)
There is much more, including discussions of potential pitfalls, economic realities, and likely scenarios for armed conflict along the way to what Tony calls the Novacene – the new era. The treatments of these sweeping paths are undertaken with a thoroughness that suggests the book’s application as a textbook in the right kind of course – perhaps The History of the Future. Listeners of my podcast know that my thoughts tend in the direction of education already.
In summary, Becoming a Butterfly is a serious work, to be taken seriously. Don’t try to skim through it in a few sessions; it demands your engagement and will reward it accordingly.
We tackle the most important question about Artificial General Intelligence – When Will It Happen? Everyone really wants to know, but no one has a clue. Estimates range from 5 to 500 years. So why talk about it? I talk about how this question was raised in a presentation and what it means to me and all of us.
We might not be able to get a date, but we’ll explore why it’s such a hard question and see what useful questions we can get out of it.
All that and our usual look at today’s headlines in AI.
Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.
In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow’s “Two Cultures”, and the interaction between AI and the humanities, along with more tales of its founding fathers.
All that and our usual look at today’s headlines in AI.
Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.
In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and the changes in thinking about intelligence since the early days.
All that and our usual look at today’s headlines in AI.
How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.
In the second half of our interview, we talk about OpenAI, economic fairness with the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future.
How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.
In part 1 of our interview, we talk about David’s singularitarian philosophy, the evolution and impact of Deep Learning, and the SingularityNET infrastructure for AI interoperation.
What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.
In this second part of our interview, we talk about his latest paper: a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control? We also discuss the current limitations of AI and how AI may evolve.