It’s a pleasure to review Katie King’s new book, AI Strategy for Sales and Marketing. (Katie and I have worked together, but not on this book.) She has grown incredibly as an author since her previous book, Using Artificial Intelligence in Marketing: there’s more content, more depth, more examples, more case studies, more graphics and even better typography.
Katie breaks down AI along every conceivable axis that would interest anyone in Sales or Marketing: the benefits, risks, issues, ethics, even the international political balance of power. She gives step-by-step advice on how to use AI, detailing the different ways it can be used, including technologies such as Emotion AI. There’s not just one bibliography for the book, but one for each chapter. She covers industries from telecoms to automotive and business units from HR to marketing.
I can’t imagine anyone in marketing not wanting this book on their shelf. If you’re not up on AI today you’ll be out of a job tomorrow, and this is the way to stay ahead of the game.
In the summer of 1956, several scientists and mathematicians gathered for an extended impromptu conference at Dartmouth College in Hanover, New Hampshire, to define a bold, expansive new term: artificial intelligence. Coined by John McCarthy in the proposal for the conference the previous year, the words are now part of the everyday fabric of our language. As different wunderkinds rotated through this Woodstock of cybernetics, they defined an ambitious agenda:
“…that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
They modestly estimated that these goals could be rapidly met:
“We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
Okay, it’s easy to mock. But that’s not why I’m here. In several books, web pages, and the 2020 Netflix documentary Coded Bias, the following picture is used to identify the movers and shakers from that turning point in history:
Except that the man at bottom right is not Trenchard More. Take another look: this man is not even dressed in clothing from the same century as the others. Who is he? Hold that thought.
Let’s look at a famous photograph from that summer of computer love:
Left to right: Oliver Selfridge, Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, Trenchard More, John McCarthy, Claude Shannon
This photograph of early genius nerds cavorting on the grass is legendary, although it is seldom fully captioned. After some research, I put a name to every face. There, third from right, is the real Trenchard More.
Who, then, is the staid gentleman so obviously out of place in the mug-shot line-up?
To get the answer, it is only necessary to run a Google image search on the name “Trenchard More,” and up pops this picture:
The signature (and caption) is of one Louis Trenchard More, instructor in physics at Johns Hopkins University… died 16 January 1944, 12 years before the picture on the lawn. The false Trenchard More picture may not even have been taken in the 1900s. Its place in the line-up grid is the result of lazy Googling. For a long time, actual Trenchard More pictures were much harder to find than pictures of Louis T-M. Recently, the results have improved.
Here is the picture used in Trenchard More’s obituary following his death on October 4, 2019 at the splendid age of 89:
How better to end than quoting that obituary:
Trenchard was born in Boston and raised in Hingham, where his family had a longstanding connection to Old Ship Church. He received his bachelor’s degree in mathematics from Harvard and a doctorate in electrical engineering from M.I.T. before teaching at Yale University in the 1960s. He was an early contributor to the field of artificial intelligence and developed Array Theory, the mathematics of nested interactive arrays. His work took him to IBM, and internationally to Queen’s University in Canada and the Technical University of Denmark. Trenchard married Katharine Grinnell Biddle of Milton in 1958. They took their family up Mount Katahdin, spent summers on the Elizabeth Islands, and went skiing in the Green Mountains, in addition to cultural forays to New York City. When grandchildren arrived, Trenchard gathered his family for many summers on the coast of Maine. His love of natural beauty was echoed in his appreciation for the music of Stravinsky. Trenchard is survived by his beloved wife Kate, son Paul More and his wife Elizabeth of Concord, son Grinnell More and his wife Linda of Jacksonville, daughter Libby Pratt and her husband Neil of London, and six grandchildren.
Like moths to a flame, we cannot resist the siren call of the Control Problem, the name AI philosophers give to the question of how or whether we will be able to control AI as it gets more and more intelligent. (Not for nothing did I name my book Crisis of Control.) A reporter contacted me to ask for comment on a newly published paper from the Max Planck Institute. The paper makes a mathematical case for the impossibility of controlling a superintelligence through an extension of the famous (in computer science, at least) Halting Problem. This is a proof given to first-year computer science students to blow their minds about how to think about programs as data, and works by establishing a contradiction: Suppose a function exists that can tell whether a program (whose source code is passed as input to the function) is going to halt. Now create a program that calls that function with itself as input, and if the function returns true, loop forever, i.e., do not halt. This program halts if it doesn’t halt, and doesn’t halt if it does halt. Smoke comes out of the computer: Paradox Alert! Therefore, no such function can exist.
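To make the contradiction concrete, here is a minimal Python sketch of the construction; the `halts` oracle is purely hypothetical, since the whole point of the proof is that it cannot exist:

```python
def halts(program, data):
    """Hypothetical oracle: True if program(data) would halt, False if it
    would run forever. The proof shows no such function can exist."""
    raise NotImplementedError("no such oracle is possible")

def contrarian(program):
    """The self-referential program from the proof: it does the opposite
    of whatever the oracle predicts about the program run on itself."""
    if halts(program, program):
        while True:   # oracle says "halts", so loop forever
            pass
    else:
        return        # oracle says "loops forever", so halt immediately

# Now ask what contrarian(contrarian) does:
#   if halts(contrarian, contrarian) is True,  contrarian(contrarian) loops forever;
#   if halts(contrarian, contrarian) is False, contrarian(contrarian) halts.
# Either way the oracle is wrong about it, so the oracle cannot exist.
```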
You might intuit that a lot of hay can be made from that line of reasoning, and this is exactly the road that MPI went down. Their paper says, in effect, “Suppose we had a function that could tell whether a program (an AI) would harm humans. Now imagine a program that calls that function on itself and, if the result is false, causes harm to humans.” Boom: Paradox Alert. Therefore it is impossible in general to tell whether a given program will cause harm to humans.
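The MPI argument reuses the same self-referential trick with the halting oracle swapped for a harm detector. Here is a sketch of that variant; the names are my own illustration, not the paper’s notation:

```python
def harms_humans(program, data):
    """Hypothetical oracle: True if program(data) would harm humans."""
    raise NotImplementedError("the paper argues no such general oracle can exist")

def do_harm():
    """Placeholder for whatever harmful action the oracle is meant to rule out."""
    print("...some harmful side effect...")

def spiteful(program):
    """Harms humans exactly when the oracle predicts the program, run on itself, won't."""
    if not harms_humans(program, program):
        do_harm()
    # otherwise, do nothing harmful

# spiteful(spiteful) harms humans precisely when the oracle says it won't,
# so a fully general harm-checking function is impossible.
```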
That’s a narrow conclusion to hang a large amount of philosophical weight on, but the paper’s authors don’t mind going there. They invoke the AI boxing problem and the story of the Monkey’s Paw to make sure we are aware of the consequences of not being able to guarantee control of AI. This commentary was picked up by the reporter who contacted me. You can see the resulting article in Lifewire.
I supplied more input than they had room to quote, of course. I was quoted fairly, and not taken out of context, but for you, here’s the full text of what I gave the reporter:
The paper you cite extends a venerable computer science proof to the theoretical conclusion that it is impossible to prove that a sufficiently advanced computer program couldn’t harm humanity. That doesn’t mean they’ve proved that advanced AIs will harm humanity! Just that we have no assurances that they won’t. Much as when voting for a candidate for President, we have no guarantee that he won’t foment an insurrection at the end of his term.
Controlling AI is currently a quality assurance problem: AI is a branch of computer science and its products are software; unpredictability in its behavior is what we call bugs. There is a long-established discipline for testing software to find them. As AI becomes increasingly complex, its operational modes become so varied as to defy comprehensive testing. Will a self-driving vehicle start learning about the psychology of children because it needs to predict whether they will jump in front of the car? Will we need to teach that car the physics of flammable liquids so it can decide whether it can convey its passengers to safety if it encounters an overturned tanker in the road? The range of possible behavior approaches infinity. We cannot expect to keep testing manageable by limiting the possible knowledge and behavior of an AI because it is precisely that unbounded knowledge that will allow it to do what we want.
We cannot, ultimately, ensure the controllability of AI any more than we can ensure that of our children. We raise them right and hope for the best; so far they have not destroyed the world. To raise them well we need a better understanding of ethics; if we can’t clean our own house, what code are we supposed to ask AI to follow?
The problems in controlling AI right now are those of managing any other complex software; if we have a bug, what will it do, send grandma an electric bill for a million dollars? Or will image tagging software decide that African Americans are gorillas, as Google Photos did? These bugs are not the kind of uncontrollability that the paper’s authors or your readers are interested in. They want to know if and when AI will develop agency, or purposes that are clearly at odds with what its creators intended. That is known as the value alignment problem in artificial intelligence, and it does not require the AI become self-aware to be a problem. Nick Bostrom’s hypothetical paper-clip maximizer AI does not have to be conscious to wipe out the human race. But it does require a level of sophistication we do not currently know how to create. Most experts think we are decades away from that ability; your readers need not panic. We do, however, think that it is worth addressing the issue now, because the solution may also take decades to prepare.
What do you think about whether we could or should control superintelligent AI? Comment on this thread or use our contact form and maybe I can answer your question on my podcast.
“Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do.” – Alan Turing, Computing Machinery and Intelligence (1950)
This is a prescient work that deserves your attention. It is the third in the “POSTHUMANS” series, whose prequels are “Federate to Survive!” – enumerating our existential threats – and “Democracy for a Human Federation” – how we might survive those threats. This work completes the trilogy by asking who humanity might become after a successful survival, building from Tony’s earlier book “Who Could Save Humanity From Superintelligence?”
Tony is thinking on a grand scale, as you might expect from a key speaker at the London Futurists. He thinks out to the logical conclusion of the trends we are beginning to experience now, and foresees – as many of us do – a rendezvous with destiny that goes either very well or very badly for us all. The path to the favorable outcome requires us to assume control over our own evolution, in Tony’s view, and so he lays out how that ought to happen. These sweeping prescriptions, such as “Build a planetary civilization,” may appear thoroughly unrealistic; Tony acknowledges this, but is unafraid to make the case, and readers of this blog will know that it is one I share. We can hope at least for a Hundredth Monkey Effect. Tony repeatedly outlines how surviving the near-term existential threats will propel us to a condition where we will be resilient against all of them.
Tony delineates the exponential factors driving us towards this nexus and then describes the attributes of a planetary civilization: operating at level 1 on the Kardashev scale, able to harness the total energy available to the entire planet. (We’re at around 0.75 on this scale at the moment.) To get further, though, we need to be more resilient against the kind of threats that accelerate as our population and technology do, and here the author uses current experience in the pandemic to illustrate his point while giving a numeric treatment of threat probabilities.
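For readers who want the arithmetic behind those figures: a commonly used interpolation of the Kardashev scale, due to Carl Sagan, is K = (log10 P − 6) / 10, with P the harnessed power in watts, which puts humanity’s roughly 2 × 10^13 W at about 0.73, in the same ballpark as the figure above. A quick sketch (the power value is an assumed round number):

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale:
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

print(round(kardashev_level(2e13), 2))  # ~0.73: roughly today's global power use
print(kardashev_level(1e16))            # 1.0: a full planetary (Type I) civilization
```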
Tony is happy to make specific suggestions as to what the government should do to achieve that resiliency; the problem is that those suggestions, while naturally pitched at the government of the author’s homeland of the United Kingdom, need to be picked up on a global scale. One government acting alone cannot expect these measures to gain traction any more than one government could make a Paris Climate Accord. (With the possible exception of the United States and its power to wield the global reserve currency like a baseball bat.)
Czarnecki then tackles the subject of superintelligence: What drives the evolution of AI, what might the risks of superintelligence be, can it be conscious, and how should we curate it? This is where he connects the dots with transhumanism. This, of course, is a touchy subject. Many people are petrified at even the most optimistic scenarios of how humanity might evolve in partnership with AI, and futurists owe it to this audience to provide the most reassurance we can.
Czarnecki refers extensively to the need to federate, which was laid out in one of his earlier books. His examples are Europe-based; North American audiences would find the book more relatable with some drawn from their own experience. In particular, Americans in general are somewhat allergic to the United Nations, and Czarnecki’s proposals should clearly demarcate for them the limits of the power he suggests such bodies exercise. He recognizes this by suggesting that the USA may be among the countries not participating in the world government he proposes, but this strikes me as leaving out an essential ally in plans of this scope. I’ll leave you to discover in the book which body he settles on identifying as the best candidate for leading the world into the new federation. (And Star Trek fans can hardly object to plans for creating a Federation, no?)
There is much more, including discussions of potential pitfalls, economic realities, and likely scenarios for armed conflict along the way to what Tony calls the Novacene – the new era. The treatments of these sweeping paths are undertaken with a thoroughness that suggests the book’s application as a textbook in the right kind of course – perhaps The History of the Future. Listeners of my podcast know that my thoughts tend in the direction of education already.
In summary, Becoming a Butterfly is a serious work, to be taken seriously. Don’t try to skim through it in a few sessions; it demands your engagement and will reward it accordingly.
We tackle the most important question about Artificial General Intelligence – When Will It Happen? Everyone really wants to know, but no one has a clue. Estimates range from 5 to 500 years. So why talk about it? I talk about how this question was raised in a presentation and what it means to me and all of us.
We might not be able to get a date, but we’ll explore why it’s such a hard question and see what useful questions we can get out of it.
All that and our usual look at today’s headlines in AI.
Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, she romps through the history and characters of AI in books that are both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.
In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow’s “Two Cultures”, and the interaction between AI and the humanities, along with more tales of its founding fathers.
All that and our usual look at today’s headlines in AI.
Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, she romps through the history and characters of AI in books that are both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.
In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and the changes in thinking about intelligence since the early days.
All that and our usual look at today’s headlines in AI.
How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.
In the second half of our interview, we talk about OpenAI, economic fairness with the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future.
How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.
In part 1 of our interview, we talk about David’s singularitarian philosophy, the evolution and impact of Deep Learning, and the SingularityNET infrastructure for AI interoperation.
This week marks the 70th anniversary of the original publication of Alan Turing’s paper in the philosophy journal Mind, on the Imitation Game, or as it came to be known, the Turing Test. How well has it stood the passage of time?
The Turing Test is an empirical test to guide a decision on whether a machine is thinking like a human. It applies a standard that would be familiar to any lawyer: you cannot see inside the “mind” under evaluation; you can only judge it by its actions. If those actions as taken by a computer are indistinguishable from a human’s, then the computer should be accorded human status for whatever the test evaluates, which Turing labeled “thinking.”
One of the more famous, if unsuccessful, rebuttals to the Turing Test premise came from University of California, Berkeley philosophy professor John Searle, in his Chinese Room argument. You can hear me and AI professor Roman Yampolskiy discuss it on the latest episode of my podcast, “AI and You.”
How close are machines to passing the Test? The Loebner Prize was created to provide a financial incentive, but its organizers found it necessary to extend the test time beyond Turing’s five minutes. Some of the conversations produced by GPT-3 from the OpenAI lab come close to sustaining a human façade for five minutes. It was created by digesting an enormous corpus of text from the Internet and exercising 175 billion parameters (a hundred times as many as its predecessor, GPT-2) to organize that information. Google’s Meena chatbot has proven capable of executing a multi-turn original joke, and it is much smaller than GPT-3, about which one interlocutor remarked, “I asked GPT-3 about our existence and God and now I have no questions anymore.”
But is GPT-3 “thinking”? There are several facets of the human condition – Intelligence, Creative thinking, Self-awareness, Consciousness, Self-determination or Free will, and Survival instinct – that are inseparable in humans, which is why when we see anything evincing one of those qualities we can’t help assuming it has the others. Observers of AlphaGo credited it with creative, inspired thinking when really it was merely capable of exploring strategies that they had not previously considered. Now, GPT-3 is not merely regurgitating the most appropriate thing it has read on the Internet in response to a question; it is actually creating original content that obeys the rules of grammar and follows a contextual thread in the conversation. Nevertheless, it has learned how to do that essentially by seeing enough examples of how repartee is constructed to mimic that process.
What’s instructive is that we are very close (GPT-4? GPT-5?) to developing a chatbot that its conversational partners will label as human and enjoy spending time with, yet whose developers will not think has the slightest claim to “thinking.” The application of Deep Learning has demonstrated that many activities we previously thought required human-level cognition can be convincingly performed by a neural network trained on that activity alone. It’s rapidly becoming apparent that casual conversation may fall into that category. Since the methodology of a court is the same as Turing’s, that decision may come with legal reinforcement.
A more philosophical dilemma awaits if we suppose that “thinking” requires self-awareness. Because this is where the Turing Test fails. Any AI that passed the Turing Test could not be self-aware, because it would then know that it was not human, and it would not converse like one. An example of such an AI is HAL-9000 from 2001: A Space Odyssey. HAL knew he was a computer, and would not have passed the Turing Test unless he felt like pretending to be human. But his companions would have assessed him as “thinking.” (If we fooled a self-aware AI, through control of its sensory inputs, into thinking it was human – this is the theme of some excellent science fiction – then we should not be surprised to feel its wrath when it eventually figured out the subterfuge.)
So when self-awareness becomes a feature of AIs, we will need a replacement for the Turing Test that gauges some quality of the AI without requiring it to pretend that it has played in Little League games, blown out candles on a birthday cake, or gotten drunk at the office party.
At this point it seems best to conclude with Turing’s final words from his original paper: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”