Podcast roundup: The inadequacy of today's AI safety practices + how to make AI go better
Proof that I'm not a deepfake (or am a very convincing one)
Welcome! A number of folks have asked where they can hear more about my work and my time at OpenAI, so I’ve put together a roundup of different podcast appearances I’ve done.
In each of these, I talk in more detail about my time at OpenAI, my research on AI risk, and my policy solutions for making AI go better. If you’d like to have me on your show or otherwise suggest an idea for coverage, please feel free to get in touch here.
Future of Life Podcast
This is probably the most thorough unpacking of different articles I’ve written and my perspectives on the AI industry.
Factually! with Adam Conover
I make the case that today’s harms from AI and the bigger ones on the horizon have more in common than people often think, and that the AI companies are earnest in their belief that they are building something incredibly dangerous.
(Note that I don’t endorse the whistleblowing frame in the title; that’s at the podcast’s discretion.)
WIRED’s The Big Interview
We talked about how AI companies can demonstrate trustworthiness to the public; the insufficiency of today’s safety testing; and what’s been going on with ChatGPT’s mental health impacts. Transcript available here.
The ControlAI Podcast
We talked about the race to build AGI as soon as possible; the inadequacy of current voluntary safety commitments and testing procedures; concerning AI behaviors like self-preservation and deception already being observed; and the risks of recursive self-improvement where AI systems are used to accelerate development of even more powerful AI.
The Cognitive Revolution
We talked about the safety questions that the AI industry continues to struggle with today; OpenAI’s attempted conversion from a non-profit to a for-profit tech company; the exodus of OpenAI staff to form Anthropic; and the changing safety culture at AI labs.
Scaling Laws
We talked about how to improve testing of AI systems, what challenges need to be solved to make good AI outcomes more likely, and how to make AI safety less dependent on personal trust in the leadership of AI companies.
The World Can Be Better
We talked about how future AI systems might contribute to a wide range of harms, what evidence AI scientists are seeing today that makes them concerned, and ways that people with a range of skill sets can contribute to solving these problems.
Robert Wright’s Non-Zero
We talked about what it means to “feel the AGI”; the different types of AI catastrophes; what it will take for US-China AI competition to end well; and what’s really happening with ChatGPT’s sycophancy.
Acknowledgements: The views expressed here are my own and do not imply endorsement by any other party. All of my writing and analysis is based solely on publicly available information.


