All posts by Peter Scott

Peter Scott is a futurist, coach, and technology expert helping people master technological disruption. After receiving a Master’s degree in Computer Science from Cambridge University, he moved to California to work for NASA’s Jet Propulsion Laboratory. Since 2012, Peter has raised awareness about artificial intelligence, educating people on the promise and peril of AI and how to understand what it really is. He has appeared on radio and television, taught university courses, and made numerous appearances in several countries. In February 2020 he spoke to Britain’s House of Lords on the future of AI, and delivered a TEDx talk to a thousand people in British Columbia, Canada. His weekly podcast, “Artificial Intelligence and You,” tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution? In July 2022, his book, also called “Artificial Intelligence and You: What AI Means For Your Life, Your Work, and Your World,” was released. His Next Wave Institute coaches executives on how to futureproof their careers and businesses. He lives on Vancouver Island with his wife and two daughters, and is a skydiver and certified scuba diver.

Artificial Intelligence

NLP and NLP

A few years ago I came to realize that I am one of a rather small set of people for whom two different expansions of the initialism NLP vie equally: Natural Language Processing and NeuroLinguistic Programming. For many years these expansions did not overlap in their application or practitioners in the slightest. Natural Language Processing was a niche field of Artificial Intelligence, whose researchers were gamely striving for the seemingly impossible goal of getting machines to understand this hot mess we call language. NeuroLinguistic Programming was a model for discerning and influencing brain patterns through different neurologically-based techniques. (I chose my own clumsy description there rather than paste in a standard definition because when I learned NLP from John Grinder, he taught us to own everything we practiced and eschew formulas.)

It’s a small point perhaps, but a significant one, that these fields have now overlapped in one crucial respect: interacting with Large Language Models, like ChatGPT. Since that model emerged blinking into the sunlight of mainstream attention on November 30, 2022, I have told people who are encountering it for the first time to think of it as human. It’s not, of course, but this is a crutch to get people up to speed, especially people who have some computer experience. Because they – we – are used to computers requiring precisely formatted inputs to give us what we want. Even the Google search engine is not conversational, but operates off key terms. So the more technical among us are used to searching with a list of keywords ordered by importance, for example: “Avocado temperature growing optimum.” But an LLM not only understands conversation, it works better with complete sentences and a coherent thread.

So how do you get the best results from LLMs? There’s a lot of posting and even legit research on that, yielding off-the-wall conclusions such as: they do better if you offer a tip, or they give more reliable answers if you tell them your job depends on it. But beyond that frippery, the more stable, useful advice boils down to this: Do what would be a best practice in communicating with another human being. Be specific. Avoid ambiguity. Don’t be vague. “Prompt engineering,” in GPT-4’s words, “is an art and science. It involves formulating questions or statements in a manner that effectively guides the AI towards delivering the most accurate, relevant, and contextually appropriate responses. This process is not just about asking questions; it’s about asking the right questions in the right way.”
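To see what that looks like in practice, here is a minimal sketch using the OpenAI Python client; the model name, the prompts, and the surrounding setup are my own illustrative assumptions, not official guidance:

```python
# A minimal sketch of vague vs. specific prompting with the OpenAI
# Python client. The model name and prompt wording are illustrative
# assumptions, not a recommended recipe.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

vague_prompt = "Tell me about growing avocados."

specific_prompt = (
    "I grow Hass avocados outdoors in a mild coastal climate. "
    "What daytime and nighttime temperature ranges are optimal for "
    "fruit set, and what should I do when a cold snap is forecast?"
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The second prompt supplies context, constraints, and a precise question; in my experience, that is where the difference in answer quality shows up.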

Of course that is easier said than done, which is why an entire foundation of NeuroLinguistic Programming, called the Meta Model, tackles it. The Meta Model gets really specific about getting specific. Here is an outline of the key components and techniques within the Meta Model:

1. Generalizations

  • Universal Quantifiers: Words like “always,” “never,” “everyone,” “nobody.” These statements are challenged to test their validity.
  • Modal Operators: Words indicating necessity (must, should, have to) or possibility (can, could, will). They express limits and are challenged to explore flexibility.

2. Deletions

  • Simple Deletions: Important information missing in a statement. For example, “I am upset,” without specifying the reason.
  • Comparative Deletions: Comparisons without a reference point. For instance, “This is better,” without explaining what it is better than.
  • Unspecified Verbs: Actions that are not clearly defined, like “She rejected me,” without detailing how the rejection occurred.

3. Distortions

  • Mind Reading: Claiming to know the thoughts or intentions of others without any concrete evidence.
  • Cause-Effect: Implying a cause-effect relationship that might not exist. For example, “You make me angry.”
  • Complex Equivalence: Two unrelated things are treated as if they are the same. For instance, “He’s late, so he doesn’t care about me.”

4. Challenging Techniques

  • Specificity Questions: Asking for specifics to clarify generalizations, deletions, and distortions. For example, “Who exactly are you referring to?” or “What specifically happened?”
  • Counter-Examples: Providing or asking for counter-examples to challenge universal generalizations.
  • Exploring Effects and Motivations: Asking about the effects or purposes of a belief or statement to understand underlying motivations.

5. Outcomes and Objectives

  • Identifying Desired Outcomes: Clarifying what the speaker wants or intends to achieve.
  • Establishing Achievable Goals: Ensuring that the goals or outcomes are specific, measurable, and achievable.

And the better you do all of those things, the better AI prompt writer you will be!
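As a playful bridge between the two NLPs, here is a hypothetical sketch of a “prompt linter” that flags a few of the Meta Model patterns above in a draft prompt before you send it to an LLM; the word lists, patterns, and function name are my own inventions, not anything canonical to either field:

```python
import re

# A hypothetical "prompt linter" that flags a few Meta Model patterns.
# The categories mirror the outline above; the word lists are
# illustrative, not exhaustive or canonical.
PATTERNS = {
    "universal quantifier": r"\b(always|never|everyone|nobody)\b",
    "modal operator": r"\b(must|should|have to|cannot|can't)\b",
    "comparative deletion": r"\b(better|worse|easier|faster)\b(?!\s+than)",
}

def lint_prompt(prompt: str) -> list[str]:
    """Return warnings for constructions the Meta Model would challenge."""
    warnings = []
    for label, pattern in PATTERNS.items():
        for match in re.finditer(pattern, prompt, re.IGNORECASE):
            warnings.append(
                f"{label}: '{match.group()}' -- what, specifically, do you mean?"
            )
    return warnings

for warning in lint_prompt("Everyone agrees this is better, so we must use it."):
    print(warning)
# Flags "Everyone" (universal quantifier), "better" with no reference
# point (comparative deletion), and "must" (modal operator).
```

A human reviewer, or for that matter an LLM, would catch far more than a few regexes can, but even this toy version shows how mechanically some of that vagueness can be spotted.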

The Meta Model is a powerful tool in NLP for dissecting and understanding language, enabling more effective communication and problem-solving. It helps individuals to break down complex or vague language to reveal underlying thoughts, beliefs, and assumptions. Its benefits and use in communication include:

  • Enhancing Clarity: Using the Meta Model to gain clarity in communication and understanding.
  • Problem-Solving: Applying the model to identify the root cause of issues and find effective solutions.
  • Personal Growth: Utilizing the model for self-reflection and personal development.

Artificial Intelligence

Review: “AI Strategy for Sales and Marketing”

It’s a pleasure to review Katie King’s new book, AI Strategy for Sales and Marketing. (Katie and I have worked together, but not on this book.) She has grown incredibly as an author since her previous book, Using Artificial Intelligence in Marketing: there’s more content, more depth, more examples, more case studies, more graphics and even better typography.

Katie breaks down AI along every conceivable axis that would interest anyone in Sales or Marketing: the benefits, risks, issues, ethics, even the international political balance of power. She gives step-by-step advice on how to use AI, detailing the different ways it can be used, including technologies such as Emotion AI. There’s not just one bibliography for the book, but one for each chapter. She covers industries from telecoms to automotive and business units from HR to marketing.

I can’t imagine anyone in marketing not wanting this book on their shelf. If you’re not up on AI today you’ll be out of a job tomorrow, and this is the way to stay ahead of the game.

Artificial Intelligence

Meet the Real Trenchard More

In the summer of 1956, several scientists and mathematicians gathered for an extended impromptu conference at Dartmouth College in Hanover, New Hampshire, to define a bold, expansive new term: artificial intelligence. Coined by John McCarthy in a paper the previous year, the words are now part of the everyday fabric of our language. As different wunderkinds rotated through this Woodstock of cybernetics, they defined an ambitious agenda:

“…that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

They modestly estimated that these goals could be rapidly met:

We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

Okay, it’s easy to mock. But that’s not why I’m here. In several books, web pages, and the 2020 Netflix documentary Coded Bias, the following picture is used to identify the movers and shakers from that turning point in history:

Except that the man at bottom right is not Trenchard More. Take another look: this man is not even dressed in clothing from the same century as the others. Who is he? Hold that thought.

Let’s look at a famous photograph from that summer of computer love:

Left to right: Oliver Selfridge, Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, Trenchard More, John McCarthy, Claude Shannon

This photograph of early genius nerds cavorting on the grass is legendary, although it is seldom fully captioned. After some research, I put a name to every face. There, third from right, is the real Trenchard More.

Who, then, is the staid gentleman so obviously out of place in the mug-shot line-up?

To get the answer, it is only necessary to run a Google image search on the name “Trenchard More,” and up pops this picture:

The signature (and caption) is of one Louis Trenchard More, instructor in physics at Johns Hopkins University… died 16 January 1944, 12 years before the picture on the lawn. The false Trenchard More picture may not even have been taken in the 1900s. Its place in the line-up grid is the result of lazy Googling. For a long time, pictures of the actual Trenchard More were much harder to find than any of Louis T-M’s. Recently, the results have improved.

Here is the picture used in Trenchard More’s obituary following his death on October 4, 2019 at the splendid age of 89:

Trenchard More Obituary (1930 - 2019) - Keene, NH - The Hingham Journal

How better to end than quoting that obituary:

Trenchard was born in Boston and raised in Hingham, where his family had a longstanding connection to Old Ship Church. He received his bachelors degree in mathematics from Harvard and a doctorate in electrical engineering from M.I.T. before teaching at Yale University in the 1960s. He was an early contributor to the field of artificial intelligence and developed Array Theory, the mathematics of nested interactive arrays. His work took him to IBM, and internationally to Queens University in Canada and the Technical University of Denmark. Trenchard married Katharine Grinnell Biddle of Milton in 1958. They took their family up Mount Katahdin, spent summers on the Elizabeth Islands, and went skiing in the Green Mountains, in addition to cultural forays to New York City. When grandchildren arrived, Trenchard gathered his family for many summers on the coast of Maine. His love of natural beauty was echoed in his appreciation for the music of Stravinsky. Trenchard is survived by his beloved wife Kate, son Paul More and his wife Elizabeth of Concord, son Grinnell More and his wife Linda of Jacksonville, daughter Libby Pratt and her husband Neil of London, and six grandchildren.

Artificial Intelligence, futurism

The Control Problem, Re-re-re-visited

Like moths to a flame, we cannot resist the siren call of the Control Problem, the name AI philosophers give to the question of how or whether we will be able to control AI as it gets more and more intelligent. (Not for nothing did I name my book Crisis of Control.) A reporter contacted me to ask for comment on a newly published paper from the Max Planck Institute. The paper makes a mathematical case for the impossibility of controlling a superintelligence through an extension of the famous (in computer science, at least) Halting Problem. This is a proof given to first-year computer science students to blow their minds about how to think about programs as data, and works by establishing a contradiction: Suppose a function exists that can tell whether a program (whose source code is passed as input to the function) is going to halt. Now create a program that calls that function with itself as input, and if the function returns true, loop forever, i.e., do not halt. This program halts if it doesn’t halt, and doesn’t halt if it does halt. Smoke comes out of the computer: Paradox Alert! Therefore, no such function can exist.
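For readers who want to see the shape of that argument in code, here is a minimal sketch in Python; the names are mine, and the entire point is that the hypothetical `halts` oracle cannot actually be implemented:

```python
# A minimal sketch of the halting-problem contradiction. The names are
# my own; the whole point is that `halts` cannot actually be written.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    ...  # assume, for the sake of contradiction, that this exists

def paradox(program):
    if halts(program, program):  # oracle says "it halts"...
        while True:              # ...so loop forever (do not halt)
            pass
    # oracle says "it never halts" -- so halt immediately

# Consider paradox(paradox):
#   if halts(paradox, paradox) is True,  paradox loops forever: it does NOT halt
#   if halts(paradox, paradox) is False, paradox returns:       it DOES halt
# Either answer makes the oracle wrong, so no such function can exist.
```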

You might intuit that a lot of hay can be made from that line of reasoning, and this is exactly the road that MPI went down. Their paper says, “Suppose we had a function that could tell whether a program (an AI) would harm humans. Now imagine a program that calls that function on itself and, if the result is false, causes harm to humans.” Boom: Paradox Alert. Therefore it is impossible in general to tell whether a given program will cause harm to humans.

That’s a narrow conclusion to hang a large amount of philosophical weight on, but the paper’s authors don’t mind going there. They invoke the AI boxing problem and the story of the Monkey’s Paw to make sure we are aware of the consequences of not being able to guarantee control of AI. This commentary was picked up by the reporter who contacted me. You can see the resulting article in Lifewire.

I supplied more input than they had room to quote, of course. I was quoted fairly, and not taken out of context, but for you, here’s the full text of what I gave the reporter:

The paper you cite extends a venerable computer science proof to the theoretical conclusion that it is impossible to prove that a sufficiently advanced computer program couldn’t harm humanity. That doesn’t mean they’ve proved that advanced AIs will harm humanity! Just that we have no assurances that they won’t. Much as when voting for a candidate for President, we have no guarantee that he won’t foment an insurrection at the end of his term.

Controlling AI is currently a quality assurance problem: AI is a branch of computer science and its products are software; unpredictability in its behavior is what we call bugs. There is a long-established discipline for testing software to find them. As AI becomes increasingly complex, its operational modes become so varied as to defy comprehensive testing. Will a self-driving vehicle start learning about the psychology of children because it needs to predict whether they will jump in front of the car? Will we need to teach that car the physics of flammable liquids so it can decide whether it can convey its passengers to safety if it encounters an overturned tanker in the road? The range of possible behavior approaches infinity. We cannot expect to keep testing manageable by limiting the possible knowledge and behavior of an AI because it is precisely that unbounded knowledge that will allow it to do what we want.

We cannot, ultimately, ensure the controllability of AI any more than we can ensure that of our children. We raise them right and hope for the best; so far they have not destroyed the world. To raise them well we need a better understanding of ethics; if we can’t clean our own house, what code are we supposed to ask AI to follow?

The problems in controlling AI right now are those of managing any other complex software; if we have a bug, what will it do, send grandma an electric bill for a million dollars? Or will image tagging software decide that African Americans are gorillas, as Google Photos did? These bugs are not the kind of uncontrollability that the paper’s authors or your readers are interested in. They want to know if and when AI will develop agency, or purposes that are clearly at odds with what its creators intended. That is known as the value alignment problem in artificial intelligence, and it does not require that the AI become self-aware to be a problem. Nick Bostrom’s hypothetical paper-clip maximizer AI does not have to be conscious to wipe out the human race. But it does require a level of sophistication we do not currently know how to create. Most experts think we are decades away from that ability; your readers need not panic. We do, however, think that it is worth addressing the issue now, because the solution may also take decades to prepare.

What do you think about whether we could or should control superintelligent AI? Comment on this thread or use our contact form and maybe I can answer your question on my podcast.

“Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do.”

— Alan Turing

Artificial Intelligence, Transhumanism

Book Review – “Becoming a Butterfly”

Recently on my AI and You podcast, my guest was Tony Czarnecki, author of “Becoming a Butterfly,” available from Amazon.

This is a prescient work that deserves your attention. It is the third in the “POSTHUMANS” series, whose prequels are “Federate to Survive!” – enumerating our existential threats – and “Democracy for a Human Federation” – how we might survive those threats. This work completes the trilogy by asking who humanity might become after a successful survival, building from Tony’s earlier book “Who Could Save Humanity From Superintelligence?”

Tony is thinking on a grand scale, as you might expect from a key speaker at the London Futurists. He thinks out to the logical conclusion of the trends we are beginning to experience now, and foresees – as many of us do – a rendezvous with destiny that goes either very well or very badly for us all. The path to the favorable outcome requires us to assume control over our own evolution, in Tony’s view, and so he lays out how that ought to happen. These sweeping prescriptions, such as “Build a planetary civilization,” may appear thoroughly unrealistic; Tony acknowledges this, but is unafraid to make the case, and readers of this blog will know that it is one I share. We can hope at least for a Hundredth Monkey Effect. Tony repeatedly outlines how surviving the near-term existential threats will propel us to a condition where we will be resilient against all of them.

Tony delineates the exponential factors driving us towards this nexus and then describes the attributes of a planetary civilization: operating at level 1 on the Kardashev scale, able to harness the total energy available to the entire planet. (We’re at around 0.75 on this scale at the moment.) To get further, though, we need to be more resilient against the kind of threats that accelerate as our population and technology do, and here the author uses current experience in the pandemic to illustrate his point while giving a numeric treatment of threat probabilities.
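For the numerically curious, Carl Sagan’s commonly cited interpolation of the Kardashev scale lets you check that figure; in the sketch below, the current power figure is an order-of-magnitude assumption for illustration:

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Carl Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type I, a full planetary civilization, is about 10^16 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's current rate of energy use is very roughly 2e13 W
# (an order-of-magnitude assumption for illustration):
print(round(kardashev_level(2e13), 2))  # -> 0.73, near the ~0.75 cited above
```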

Tony is happy to make specific suggestions as to what the government should do to achieve that resiliency; the problem is that those suggestions, while naturally pitched at the government of the author’s homeland of the United Kingdom, need to be picked up on a global scale. One government acting alone cannot expect these measures to gain traction any more than one government could make a Paris Climate Accord. (With the possible exception of the United States and its power to wield the global reserve currency like a baseball bat.)

Czarnecki then tackles the subject of superintelligence: What drives the evolution of AI, what might the risks of superintelligence be, can it be conscious, and how should we curate it? This is where he connects the dots with transhumanism. This, of course, is a touchy subject. Many people are petrified at even the most optimistic scenarios of how humanity might evolve in partnership with AI, and futurists owe it to this audience to provide the most reassurance we can.

Czarnecki refers extensively to the need to federate, which was laid out in one of his earlier books. His examples are Europe-based, and North American audiences would find the book more relatable if some were drawn from their own experience. In particular, Americans in general are somewhat allergic to the United Nations, and Czarnecki’s proposals should clearly demarcate for them the limits of the power he suggests it exercise. He recognizes this by suggesting that the USA may be among the countries not participating in the world government he proposes, but this strikes me as leaving out an essential ally in plans of this scope. I’ll leave you to discover in the book which body he settles on identifying as the best candidate for leading the world into the new federation. (And Star Trek fans can hardly object to plans for creating a Federation, no?)

There is much more, including discussions of potential pitfalls, economic realities, and likely scenarios for armed conflict along the way to what Tony calls the Novacene – the new era. The treatments of these sweeping paths are undertaken with a thoroughness that suggests the book’s application as a textbook in the right kind of course – perhaps The History of the Future. Listeners of my podcast know that my thoughts tend in the direction of education already.

In summary, Becoming a Butterfly is a serious work, to be taken seriously. Don’t try to skim through it in a few sessions; it demands your engagement and will reward it accordingly.

Artificial Intelligence, futurism, Technology

024 – The Biggest Question About AGI

https://www.podbean.com/media/share/pb-ujcbv-f271cd

This and all episodes at: https://aiandyou.net/ .


We tackle the most important question about Artificial General Intelligence – When Will It Happen? Everyone really wants to know, but no one has a clue. Estimates range from 5 to 500 years. So why talk about it? I talk about how this question was raised in a presentation and what it means to me and all of us.

We might not be able to get a date, but we’ll explore why it’s such a hard question and see what useful questions we can get out of it.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

AGI


Artificial Intelligence, futurism, Technology

023 – Guest: Pamela McCorduck, AI Historian, part 2

https://www.podbean.com/media/share/pb-2umm5-f259e4

This and all episodes at: https://aiandyou.net/ .


Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.

In the second half of this interview, we talk about changes in the experience of women in computing, C. P. Snow’s “Two Cultures”, and the interaction between AI and the humanities, along with more tales of its founding fathers.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

Pamela McCorduck


Artificial Intelligence, futurism, Technology

022 – Guest: Pamela McCorduck, AI Historian

https://www.podbean.com/media/share/pb-5a3ez-f1ed50

This and all episodes at: https://aiandyou.net/ .


Every Johnson should have a Boswell, and the entire artificial intelligence field has Pamela McCorduck as its scribe. Part historian, part humorist, part raconteuse, her books romp through the history and characters of AI as both authoritative record and belles-lettres. Machines Who Think (1979, 2003) and her recent sequel This Could Be Important (2019) help us understand the who, what, and why of where AI has come from.

In this interview, we talk about the boom-bust cycle of AI, why the founders of the field thought they could crack the problem of thought in a summer, and the changes in thinking about intelligence since the early days.

All that and our usual look at today’s headlines in AI.

Transcript and URLs referenced at HumanCusp Blog.

Pamela McCorduck


Artificial Intelligence, futurism, Technology

021 – Guest: David Wood, Futurist, part 2

https://www.podbean.com/media/share/pb-y2bxm-f0cd40

This and all episodes at: https://aiandyou.net/ .


How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.

In the second half of our interview, we talk about OpenAI, economic fairness with the AI dividend, how building an ecosystem with feedback cycles addresses disruption, and how you can participate in shaping the future.

Transcript and URLs referenced at HumanCusp Blog.

David Wood


Artificial Intelligence, futurism, Technology

020 – Guest: David Wood, Futurist

https://www.podbean.com/media/share/pb-u7qnm-f03b1e

This and all episodes at: https://aiandyou.net/ .


How do you drive a community of futurists? David Wood was one of the pioneers of the smartphone industry, co-founding Symbian in 1998. He is now an independent futurist consultant, speaker and writer. As Chair of the London Futurists, he has hosted over 200 public discussions about technoprogressive topics. He is the author or lead editor of nine books, including Smartphones for All, The Abolition of Aging, Transcending Politics, and Sustainable Superabundance.

In part 1 of our interview, we talk about David’s singularitarian philosophy, the evolution and impact of Deep Learning, and his SingularityNET infrastructure for AI interoperation.

Transcript and URLs referenced at HumanCusp Blog.

David Wood