
Artificial Intelligence, Employment, Existential Risk, Philosophy, Politics

Podcasting: The Triple Crown

In case it wasn’t already clear… I’m new at this whole social media outreach thing. But the message is more important than the messenger’s insecurities, so I’m working it anyway, knowing that eventually I’ll get better at it… after failing enough times.

So, I have three important announcements, all about podcasting.

First: On April 27, I was Blaine Bartlett's guest on his Soul of Business show (link).

Blaine is a friend of several years now and he is one of the most thoughtful, practically compassionate business consultants I know. He coaches top companies and their executives on how to be good and do good, while remaining competitive and relevant within a challenging world.

Second: On June 16, I was Tom Dutta's guest on his Quiet Warrior Show.

Part 2 will be released on June 23.

Tom spoke after me at TEDxBearCreekPark and embodies vulnerability in a good cause. He speaks candidly about his own mental health history and works to relieve the stigma that keeps executives from seeking help.

And finally… the very first episodes of my own podcast are nearly ready to be released! On Monday, June 22, at 10 am Pacific Time, the first episode of AI and You will appear. I’m still figuring this podcasting thing out, so if you’ve been down this road before and can see where I’m making some mistakes… let me know! Show link.

Artificial Intelligence, Spotlight

Classification Schemas for AI Failures

I am exceedingly happy to report the release of my first academic paper in this field, “Classification Schemas for Artificial Intelligence Failures.” It is on the arXiv site here.

Last year I reached out to Professor Roman Yampolskiy, one of the endorsers of Crisis of Control and a prolific author and speaker on the topic of AI Safety (a term he coined), asking whether there was anything I could do to assist. He suggested writing a paper on AI failures, for which he would supply data.

Roman wrote a 2018 paper cataloguing several dozen failures of artificial intelligence and other cybernetic systems, raising important questions about where this trend might go as AIs become more complex and powerful. My paper extends that analysis with further questions and proposes a classification schema that facilitates the categorization of AI failures.

I’m excited at how it turned out and I look forward to it being picked up in a suitable journal!