Anyone in a field of expertise can agree that the press doesn’t cover them as accurately as they would like. Sometimes that falls within the limits of what a layperson can reasonably be expected to learn about a field in a thirty-minute interview; sometimes it’s rank sensationalism. Zachary Lipton, a PhD candidate at UCSD, takes aim at the press coverage of AI in “The AI Misinformation Epidemic” and its follow-up, but uses a blunderbuss and takes out a lot of innocent bystanders.

He rants against media sensationalism and misinformation but provides no examples to debate other than a Vanity Fair article about Elon Musk. He goes after Kurzweil in particular but doesn’t say what specifically he disagrees with Kurzweil on. He says that Kurzweil’s date for the Singularity is made up, but Kurzweil has published his reasoning, and Lipton doesn’t say what data or assertions in that reasoning he disagrees with. He says the Singularity is a nebulous concept, which is bound to be true in the minds of many laypeople who have heard the term, but he references Kurzweil immediately adjacent to that assertion and yet doesn’t say what about Kurzweil’s vision is wrong or unfounded, dismissing it instead as “religion,” which apparently means he doesn’t have to be specific.

He says that there are many people making pronouncements in the field who are unqualified to do so but doesn’t name anyone aside from Kurzweil and Vanity Fair, nor does he say what credentials should be required to qualify someone. Kurzweil, who invented omni-font optical character recognition and a pioneering music synthesizer, is not qualified?

He takes no discernible stand on the issue of AI safety, yet his scornful tone is readily interpreted as pooh-poohing not just utopianism but alarmism as well. That puts him at odds with Stephen Hawking, whose academic credentials are beyond question. Nick Bostrom also comes in for attack in the comments, for no specified reason but with the implication that since he makes AI dangers digestible for the masses, he is therefore sensationalist. Perhaps that is why I reacted so strongly to this article.

There is of course hype, but it is hard to tell exactly where. In 2015, an article claiming that a computer would beat the world champion Go player within a year would have been roundly dismissed as hype. In 2009, any article asserting that self-driving vehicles would be ready for public roads within five years would have been overreaching. Kurzweil has a good track record of predictions; they just tend to run behind schedule. The point is, if an assertion about an existential threat turns out to be well founded but we ignore it because existential threats have always appeared over-dramatized, then it will be too late to say, “Oops, missed one.” We have to take this stuff seriously.

Posted by Peter Scott

Peter Scott is a futurist, coach, and technology expert helping people master technological disruption. After receiving a Master’s degree in Computer Science from Cambridge University, he moved to California to work for NASA’s Jet Propulsion Laboratory. Since 2012, Peter has raised awareness about artificial intelligence, educating people on the promise and peril of AI and how to understand what it really is. He has appeared on radio and television, taught university courses, and made numerous appearances in several countries. In February 2020 he spoke to Britain’s House of Lords on the future of AI, and delivered a TEDx talk to a thousand people in British Columbia, Canada. His weekly podcast, “Artificial Intelligence and You,” tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution? In July 2022, his book, also called “Artificial Intelligence and You: What AI Means For Your Life, Your Work, and Your World,” was released. His Next Wave Institute coaches executives on how to future-proof their careers and businesses. He lives on Vancouver Island with his wife and two daughters, and is a skydiver and certified scuba diver.

3 Comments

  1. Hi Peter,

    Your portrayal of “the AI misinformation epidemic” is inaccurate. I understand that you may have taken the post personally, as you appear to be a constituent of the influencer industry. But to avoid an ad hominem discussion, I’ll focus on the concrete issues:

    “Anyone in a field of expertise can agree that the press doesn’t cover them as accurately as they would like.”

    This understates the present problem. Yes, the press gets many things wrong. But not all of them have the potential to impact society as profoundly as developments in machine learning. The fact that people are getting information from news sources that cannot distinguish between cheerleaders (like you, on this site) and sober, technically grounded perspectives on machine learning should alarm us all. It’s a similar situation to the financial meltdown, where CNBC has bank representatives and stock-market cheerleaders like Jim Cramer posing as news sources.

    I have the fortune of having expertise in multiple areas. For longer than I’ve been a machine learning researcher, I have played jazz saxophone. Jazz is consistently misrepresented in the press – the situation is so severe that most people cannot distinguish between elevator pop music (Kenny G) and the long, sophisticated tradition of improvised music that the term ‘jazz’ denotes. However, while this sucks for musicians, the broader world can survive it. The economy won’t collapse, and most people’s lives will go on. The same can be said of the press’s sensational coverage of theoretical and experimental physics (string theory, the Higgs boson as “God particle”).

    With AI, as with politics and finance, it’s especially important that we hold journalism to a higher bar. Generally the press does a more reasonable job of keeping even footing with politicians. At the very least, writers at the serious publications are experts themselves and can muster a reasonable amount of skepticism.

    “a blunderbuss and takes out a lot of innocent bystanders”

    I do not take out any innocent bystanders. I address the Vanity Fair puff piece, which is truly out of line, and I address Ray Kurzweil who, as a public figure, can be held accountable for the misinformation that he’s spread and for confusing his conviction that he will live forever with a scientific axiom.

    Also note that my article is an introductory post, a thesis introducing a series of posts. The most recent post went up yesterday addressing a Guardian puff piece that grossly misrepresents the state of robotics and artificial “minds”: https://www.theguardian.com/news/2017/apr/07/meet-erica-the-worlds-most-autonomous-android. Many more specifics will follow.

    “Kurzweil has published his reasoning”

    The reasoning, as noted in my post, in the comments, and again in the follow-up, is flawed. “Published”, in this case, just means he wrote it down on a piece of paper. The Bible is also published. Kurzweil, and the evangelical materials on the Singularity website, refer to exponential growth. But what precisely is growing “exponentially”? Technology. What does this mean? It means precisely nothing. How do you measure it? Well, they used to look to Moore’s law. When that ceased to hold, they considered computations per dollar. When that ceases to hold, they will invent another measure, so long as the fetishized word “exponential” can be retained.

    “He says that there are many people making pronouncements in the field who are unqualified to do so”

    I suggest you pay attention to the follow-up posts. For obvious reasons, I’d rather only publish complete critiques. My intent is not to conduct character assassinations. One of the nice things about the scientific writing style is that we focus on work, not on individuals. While sometimes we have to breach this principle in journalistic writing (as by addressing public figures, like Kurzweil), this should not be done gratuitously. You should note your hypocrisy in a) accusing me of not naming names and b) accusing me of taking out innocent bystanders.

    I’m not squarely at odds with Stephen Hawking and not necessarily with Nick Bostrom. Futurism isn’t the problem; misinformation is. Not all futurists are so sloppy in their thinking. Bostrom and Hawking are considerably more measured thinkers, and I will give their statements and critiques commensurately measured replies. And I actually agree with much/most of what I’ve heard Stuart Russell say.

    “The point is, if an assertion about an existential threat turns out to be well founded but we ignore it”

    Note that I don’t even object out of hand to the idea that AI might *one day* in some form comprise an existential threat. That is orthogonal to the question of whether people are being misinformed on the specifics by charlatans.

    You should read more carefully and try next time to offer a less sloppy critique.


    1. Thank you for your comments. There is, certainly, much we can agree on – the Guardian piece you just commented on is not alone in characterizing a dumb machine as having consciousness (although that hasn’t prevented some people from relating to such machines anyway: http://money.cnn.com/mostly-human/i-love-you-bot/). This particular trope is hardly harmful, though, considering that you can date it back as far as ELIZA. Funny you should mention the Higgs boson, since there was lurid speculation when the Large Hadron Collider came online that it could create a mini black hole that would swallow the Earth – surely at least as dangerous a meme.

      Kurzweil is necessarily offering a prediction that is more qualitative than quantitative. Just because Moore’s original law was about transistor density doesn’t invalidate pointing out that it fits within a longer curve – one measured in instructions executed per dollar – that is more meaningful for future predictions. Kurzweil estimated that 2045 would be the point where a device with the computational capacity of a human brain would cost about the price of a personal computer. Specific useful arguments we could have at this point would include: (a) what is the computational capacity of the human brain, (b) what trend assumptions result in that figure, and (c) how valid are those assumptions?
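
      To make (b) concrete, here is a back-of-envelope sketch of how a date falls out of such trend assumptions. Every constant in it is an illustrative placeholder (the brain-capacity figure is one Kurzweil has cited; the base-year price-performance and doubling time are assumptions of mine, not established values):

      ```python
      import math

      # All constants below are illustrative assumptions, not established figures.
      BRAIN_OPS_PER_SEC = 1e16    # assumed brain capacity; Kurzweil has cited ~10^16 calc/sec
      TARGET_PRICE = 1000.0       # "about the price of a personal computer"
      BASE_YEAR = 2017
      BASE_OPS_PER_DOLLAR = 1e9   # assumed hardware price-performance in BASE_YEAR
      DOUBLING_TIME_YEARS = 1.5   # assumed price-performance doubling time

      # Ops/sec per dollar needed for a $1000 machine to match the brain:
      needed_ops_per_dollar = BRAIN_OPS_PER_SEC / TARGET_PRICE  # 1e13

      # Doublings required to close the gap, and the years they take:
      doublings = math.log2(needed_ops_per_dollar / BASE_OPS_PER_DOLLAR)
      years = doublings * DOUBLING_TIME_YEARS

      print(f"{doublings:.1f} doublings -> around {BASE_YEAR + years:.0f}")
      # With these placeholders: ~13.3 doublings -> around 2037.
      ```

      The output matters less than its sensitivity: halve the doubling time or dispute the brain figure by a couple of orders of magnitude, and the date shifts by a decade or more, which is exactly why arguments (a) through (c) are the ones worth having.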

      Kurzweil further connects that time with a “Singularity” (although the underlying idea of an intelligence explosion goes back to I. J. Good in the ’60s), and again we could have useful arguments like (a) what does “effectively infinite” mean in terms of human impact, (b) what assumptions are inherent in that, and (c) do those assumptions conflict with observable facts or extrapolations? The fact that there is much digging to do to lay this bare doesn’t mean that “there is no there there.” Stephen Hawking has made even broader assertions, and yet it is a hallmark of great intellects that they make logical leaps faster than other people; we should not dismiss those assertions just because we don’t see the connections.

      By comparison, global climate change (while still disputed in partisan domains) was asserted long before the numerical evidence was overwhelming. But when the predicted consequences are (a) so impactful and (b) so slow to mitigate that we must start immediately, we do not have the luxury of waiting for statistical certainty to arrive. So it is with the Singularity – and here I diverge from Kurzweil: where he predicts it will be completely utopian, I think there is a far greater chance of it being catastrophic. When those consequences are so far-ranging for humanity, we should not dismiss them simply because we find the chain of reasoning incomplete, especially when we would need to begin – somehow – shaping society to prepare for those changes immediately. And in that respect, even pieces like the Guardian’s may accidentally hit the right target by beginning to ask the right broader questions, even with the wrong provocation. (Yes, if someone buys one of those products thinking that they’re going to get a slave/servant/companion, they’re going to be disappointed, but that’s hardly serious fallout.)

      Commentary that is aimed at the mass market in the middle of the IQ scale is necessarily going to be at a different level of complexity from that which is useful to academics, but we need that commentary when the entire population is at risk. For instance, two studies predict nearly 50% of all job functions in North America will be lost to automation. (The time period was not part of the studies, but the authors told me they were thinking 10–20 years.) That’s a conversation that needs to be amplified immediately.

      There is great value to be had in discussing the questions raised by Kurzweil’s predictions. Some kinds of statement don’t merely predict but can also affect the future; clearly Kurzweil is trying to do both. I know it doesn’t translate well to today’s ML developers who are connecting different neural networks and see no place in their code for an evil overlord to emerge, but some things are worth preparing for long in advance.


  2. “That puts him at odds with Stephen Hawking, whose academic credentials are beyond question.”

    See, that’s your problem right there. You think that someone with expertise in a totally unrelated field is somehow an expert in AI.

