#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps

02:47:09 · 2024-01-24

Episode description

Back in December we spoke with Nathan Labenz — AI entrepreneur and host of The Cognitive Revolution Podcast — about the speed of progress towards AGI and OpenAI's leadership drama, drawing on Nathan's alarming experience red-teaming an early version of GPT-4 and resulting conversations with OpenAI staff and board members.

Links to learn more, video, highlights, and full transcript.

Today we go deeper, diving into:

  • What AI now actually can and can't do, across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons — and what the next big breakthrough might be.
  • Why most people, including most listeners, probably don't know and can't keep up with the new capabilities and wild results coming out across so many AI applications — and what we should do about that.
  • How we need to learn to talk about AI more productively, particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, which may be counterproductive for everyone.
  • Where Nathan agrees with and departs from the views of 'AI scaling accelerationists.'
  • The chances that anti-regulation rhetoric from some AI entrepreneurs backfires.
  • How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing.
  • Preparing for coming societal impacts and potential disruption from AI.
  • Practical ways that curious listeners can try to stay abreast of everything that's going on.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Latest episodes

80,000 Hours Podcast

#201 – Ken Goldberg on why your robot butler isn’t here yet

2024-09-13 · 2h 1min
80,000 Hours Podcast

#200 – Ezra Karger on what superforecasters and experts think about existential risks

2024-09-04 · 2h 49min
80,000 Hours Podcast

#199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy

2024-08-29 · 1h 12min
80,000 Hours Podcast

#198 – Meghan Barrett on challenging our assumptions about insects

2024-08-26 · 3h 48min
80,000 Hours Podcast

#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task

2024-08-22 · 2h 29min
80,000 Hours Podcast

#196 – Jonathan Birch on the edge cases of sentience and why they matter

2024-08-15 · 2h 1min
80,000 Hours Podcast

#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

2024-08-01 · 2h 8min
80,000 Hours Podcast

#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

2024-07-26 · 3h 4min
80,000 Hours Podcast

#193 – Sihao Huang on the risk that US–China AI competition leads to war

2024-07-18 · 2h 23min
80,000 Hours Podcast

#192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

2024-07-12 · 1h 54min