podcast

017 – Guest: Roman Yampolskiy, Professor of AI Safety, part 2

https://www.podbean.com/media/share/pb-35e4b-ee7a74

This and all episodes at: http://aiandyou.net/ .

 

What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of their Cyber Security lab, and a key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.

In this second part of our interview, we talk about his latest paper, a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control? We also discuss the current limitations of AI and how AI may evolve.

Transcript and URLs referenced at HumanCusp Blog.

Roman Yampolskiy

 

Artificial Intelligence, Philosophy

Turing, Tested by Time

This week marks the 70th anniversary of the original publication of Alan Turing’s paper “Computing Machinery and Intelligence” in the philosophy journal Mind, which introduced the Imitation Game, or as it came to be known, the Turing Test. How well has it withstood the passage of time?

The Turing Test is an empirical test to guide a decision on whether a machine is thinking like a human.  It applies a standard that would be familiar to any lawyer: you cannot see inside the “mind” under evaluation; you can only judge it by its actions.  If those actions, as taken by a computer, are indistinguishable from a human’s, then the computer should be accorded human status for whatever the test evaluates, which Turing labeled “thinking.”
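Turing’s protocol can be sketched as a blind trial. This is a hypothetical illustration only: the judge object and the two reply functions are stand-ins for the human interrogator and the two hidden respondents, not any real system.

```python
import random

def imitation_game(judge, human_reply, machine_reply, turns=5):
    """Run one blind trial of Turing's imitation game.

    The judge converses with two hidden respondents, A and B, one of
    which is a machine, and must say which is which. The judge sees
    only text, never the respondents themselves.
    """
    # Randomly assign the machine to slot A or B so position carries no clue.
    machine_is_a = random.choice([True, False])
    respond_a = machine_reply if machine_is_a else human_reply
    respond_b = human_reply if machine_is_a else machine_reply

    transcript = []
    for _ in range(turns):
        question = judge.ask(transcript)
        transcript.append((question, respond_a(question), respond_b(question)))

    # The machine "passes" a trial when the judge's guess is wrong;
    # over many trials, passing means the judge does no better than chance.
    verdict_a_is_machine = judge.decide(transcript)
    return verdict_a_is_machine == machine_is_a
```

The return value reports whether the judge identified the machine on this one trial; Turing’s criterion concerns the judge’s success rate over many such trials.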

One of the more famous, if unsuccessful, rebuttals to the Turing Test premise came from University of California at Berkeley philosophy professor John Searle, in his Chinese Room argument. You can hear me and AI professor Roman Yampolskiy discuss it on the latest episode of my podcast, “AI and You.”

How close are machines to passing the Test?  The Loebner Prize was created to provide a financial incentive, but its organizers found it necessary to extend the test time beyond Turing’s five minutes.  Some of the conversations produced by GPT-3, from the OpenAI lab, come close to sustaining a human façade for five minutes.  GPT-3 was created by digesting an enormous corpus of text from the Internet and exercising 175 billion parameters (more than a hundred times as many as its predecessor, GPT-2) to organize that information; one interlocutor remarked, “I asked GPT-3 about our existence and God and now I have no questions anymore.”  Google’s Meena chatbot, though much smaller than GPT-3, has proven capable of executing a multi-turn original joke.

But is GPT-3 “thinking”?  There are several facets of the human condition – intelligence, creative thinking, self-awareness, consciousness, self-determination or free will, and survival instinct – that are inseparable in humans, which is why when we see anything evincing one of those qualities we can’t help assuming it has the others.  Observers of AlphaGo credited it with creative, inspired thinking when really it was merely capable of exploring strategies that they had not previously considered.  Now, GPT-3 is not merely regurgitating the most appropriate thing it has read on the Internet in response to a question; it is actually creating original content that obeys the rules of grammar and follows a contextual thread in the conversation.  Nevertheless, it has learned to do that essentially by seeing enough examples of how repartee is constructed to mimic the process.

What’s instructive is that we are very close (GPT-4? GPT-5?) to developing a chatbot that its conversational partners will label as human and enjoy spending time with, yet that its developers will not think has the slightest claim to “thinking.”  The application of Deep Learning has demonstrated that many activities we previously thought required human-level cognition can be convincingly performed by a neural network trained on that activity alone.  It is rapidly becoming apparent that casual conversation may fall into that category.  Since the methodology of a court is the same as Turing’s, that decision may come with legal reinforcement.

A more philosophical dilemma awaits if we suppose that “thinking” requires self-awareness.  Because this is where the Turing Test fails.  Any AI that passed the Turing Test could not be self-aware, because it would then know that it was not human, and it would not converse like one.  An example of such an AI is HAL-9000 from 2001: A Space Odyssey.  HAL knew he was a computer, and would not have passed the Turing Test unless he felt like pretending to be human.  But his companions would have assessed him as “thinking.”  (If we fooled a self-aware AI, through control of its sensory inputs, into thinking it was human – this is the theme of some excellent science fiction – then we should not be surprised to feel its wrath when it eventually figured out the subterfuge.)

So when self-awareness becomes a feature of AIs, we will need a replacement for the Turing Test that gauges some quality of the AI without requiring it to pretend that it has played in Little League games, blown out candles on a birthday cake, or gotten drunk at the office party. 

At this point it seems best to conclude with Turing’s final words from his original paper: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

podcast

016 – Guest: Roman Yampolskiy, Professor of AI Safety

https://www.podbean.com/media/share/pb-z6488-ed8c9a

This and all episodes at: http://aiandyou.net/ .

 

What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of their Cyber Security lab, and a key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over.

In this first part of our interview, we talk about his latest paper, a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control?

All this and our usual look at today’s AI headlines.

Transcript and URLs referenced at HumanCusp Blog.

Roman Yampolskiy

 

podcast

015 – Guest: Karina Vold, Professor of Philosophy, part 2

https://www.podbean.com/media/share/pb-ndcrt-ec250b

This and all episodes at: http://aiandyou.net/ .

 

How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: A philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher’s perspective. In the second half of this interview we learn about value alignment, the Trolley Problem, and just what those institutes do about AI.  Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.

Transcript and URLs referenced at HumanCusp Blog.

Karina Vold

 

podcast

014 – Guest: Karina Vold, Professor of Philosophy

https://www.podbean.com/media/share/pb-3sreu-eb34e3

This and all episodes at: http://aiandyou.net/ .

 

How will we keep our current and future artificial intelligences ethically aligned with human preferences? Who do we need to help with that? Answer: A philosopher of the use of emerging cognitive technologies. Karina Vold is Assistant Professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology and has recently come from the Leverhulme Centre for the Future of Intelligence. She thinks, writes, and speaks about the evolution of AI from a philosopher’s perspective. In this interview we learn about the Philosophy of Mind, the Extended Mind Hypothesis – and find out who Otto and Inga are. Ever wondered whether you could make a living as a philosopher? Karina will tell you how she has.

All this and our usual look at today’s AI headlines.

Transcript and URLs referenced at HumanCusp Blog.

Karina Vold

 

podcast

013 – Guest: Paolo Pirjanian, Embodied Robotics CEO, part 2

https://www.podbean.com/media/share/pb-j7jmj-ea3dfc

This and all episodes at: http://aiandyou.net/ .

 

Have you seen a robot help a troubled child? This week’s guest makes one. This is part 2 of the interview with Paolo Pirjanian, who is the former CTO of iRobot and early leader in the field of consumer robotics with 16+ years of experience developing and commercializing cutting-edge home robots. He led world-class teams and companies at iRobot®, Evolution Robotics®, and others.  In 2016, Paolo founded Embodied, Inc. with the vision to build socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives. We will learn more about how Moxie the robot works, what it can do, and Paolo’s plans for future robots.

Transcript and URLs referenced at HumanCusp Blog.

Paolo Pirjanian

 

podcast

012 – Guest: Paolo Pirjanian, Embodied Robotics CEO

https://www.podbean.com/media/share/pb-npn8j-e93d41

This and all episodes at: http://aiandyou.net/ .

 

Have you seen a robot help a troubled child? This week’s guest makes one. Paolo Pirjanian is the former CTO of iRobot and early leader in the field of consumer robotics with 16+ years of experience developing and commercializing cutting-edge home robots. He led world-class teams and companies at iRobot®, Evolution Robotics®, and others.  In 2016, Paolo founded Embodied, Inc. with the vision to build socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives.

All this and our usual look at today’s AI headlines.

Transcript and URLs referenced at HumanCusp Blog.

Paolo Pirjanian

 

podcast

011 – Guest: Kristóf Kovács, Mensa Psychologist, part 2

https://www.podbean.com/media/share/pb-mp8p3-e82761

This and all episodes at: http://aiandyou.net/ .

 

We’ve spent all this time talking about artificial intelligence and we know what ‘artificial’ means, but what is ‘intelligence’? Who better to answer that than the International Supervisory Psychologist of Mensa, Kristóf Kovács? He is a senior research fellow at Eötvös Loránd University researching cognitive psychology and psychometrics.

Most people are content to define ‘intelligence’ as ‘that which an IQ score measures’ – but what if it’s your job to write the IQ test? To validate those tests? To know what they mean? How can we know what artificial intelligence is until we understand the real thing? Find out more in this episode, when we also talk about what IQ tests are measuring and how to interpret them, what Mensa does, and Kristóf’s research into the g-factor of intelligence.

All this and our usual look at today’s AI headlines and a description of my upcoming continuing studies course at the University of Victoria on the same theme as this podcast, which is open to online enrollment from all over the world; register at https://bit.ly/UVicAIandYou.

Transcript and URLs referenced at HumanCusp Blog.

Kristóf Kovács

 

Artificial Intelligence

Continuing Studies Course Now Online!

My continuing studies course, now in its fourth run, will be hosted again by the University of Victoria starting September 9, and this time it will be online! (For the obvious reasons.) Now that location is no barrier, I have moved the time to one that is convenient for people all the way from Honolulu to Moscow. Register up to 4 days in advance to allow time to receive an account. The cost is reduced as well!

Wednesdays, September 9 – October 7, 10:00 am – 12:00 pm. Click here for more information and to register. This course is for anyone with an interest in the short- and long-term future of humanity with respect to the effects of artificial intelligence, and it has a general focus.

We will work through practical definitions and explorations of the nature of artificial intelligence (AI). We will look at the effects of its disruption upon a variety of social institutions and sort out the hype from the science. It is essentially an interactive, real-time application of my book.

Link to flyer: UVic.

podcast

010 – Guest: Kristóf Kovács, Mensa Psychologist

https://www.podbean.com/media/share/pb-5mk86-e72548

This and all episodes at: http://aiandyou.net/ .

 

We’ve spent all this time talking about artificial intelligence and we know what ‘artificial’ means, but what is ‘intelligence’? Who better to answer that than the International Supervisory Psychologist of Mensa, Kristóf Kovács? He is a senior research fellow at Eötvös Loránd University researching cognitive psychology and psychometrics.

Most people are content to define ‘intelligence’ as ‘that which an IQ score measures’ – but what if it’s your job to write the IQ test? To validate those tests? To know what they mean? How can we know what artificial intelligence is until we understand the real thing? Find out more in this episode!

All this and our usual look at today’s AI headlines.

Transcript and URLs referenced at HumanCusp Blog.

Kristóf Kovács