Ep 9 - Scaling AI safety research w/ Adam Gleave (CEO, FAR AI)

We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI’s mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not yet ready for commercialisation, spanning adversarial robustness, interpretability, preference learning, & more. We talk to Adam about: * The founding story of FAR as an AI safety org, and how it's different from the big commercial labs (e.g. OpenAI)...

Episodes (15)

Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)

We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield...

19 Jun 2024, 2h 42min

Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)

We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI — when certain capabilities will arise, wha...

19 Jun 2024, 1h 20min

Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host)

We speak with Rob Miles. Rob is the host of the “Robert Miles AI Safety” channel on YouTube, the single most popular AI alignment video series out there — he has 145,000 subscribers and his top video ...

8 Mar 2024, 1h 21min

Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)

We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment. A great w...

14 Dec 2023, 1h 37min

Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)

We speak with Ryan Kidd, Co-Director of the ML Alignment & Theory Scholars (MATS) program, previously "SERI MATS". MATS (https://www.matsprogram.org/) provides research mentorship, technical seminars, an...

8 Nov 2023, 1h 16min

Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact)

We speak with Jamie Bernardi, co-founder & AI Safety Lead at the not-for-profit BlueDot Impact, which hosts the biggest and most up-to-date courses on AI safety & alignment at AI Safety Fundamentals (https:/...

12 Oct 2023, 1h 7min

Ep 7 - Responding to a world with AGI w/ Richard Dazeley (Prof AI & ML, Deakin University)

In this episode, we speak with Prof Richard Dazeley about the implications of a world with AGI and how we can best respond. We talk about what he thinks AGI will actually look like as well as the tech...

3 Aug 2023, 1h 10min
