#156 – Markus Anderljung on how to regulate cutting-edge AI models
80,000 Hours Podcast · 10 Jul 2023


"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it.

And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus Anderljung

In today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.

Links to learn more, summary and full transcript.

They cover:

  • The need for AI governance, including self-replicating models and ChaosGPT
  • Whether or not AI companies will willingly accept regulation
  • The key regulatory strategies, including licensing, risk assessment, auditing, and post-deployment monitoring
  • Whether we can be confident that people won't train models covertly and ignore the licensing system
  • The progress we’ve made so far in AI governance
  • The key weaknesses of these approaches
  • The need for external scrutiny of powerful models
  • The emergent capabilities problem
  • Why it really matters where regulation happens
  • Advice for people wanting to pursue a career in this field
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Simon Monsour and Milo McGuire

Transcriptions: Katy Moore

Episodes (325)

#180 – Hugo Mercier on why gullibility and misinformation are overrated


The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ran...

21 Feb 2024 · 2h 36min

#179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety


Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

12 Feb 2024 · 2h 56min

#178 – Emily Oster on what the evidence actually says about pregnancy and parenting


"I think at various times — before you have the kid, after you have the kid — it's useful to sit down and think about: What do I want the shape of this to look like? What time do I want to be spending...

1 Feb 2024 · 2h 22min

#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps


Back in December we spoke with Nathan Labenz — AI entrepreneur and host of The Cognitive Revolution Podcast — about the speed of progress towards AGI and OpenAI's leadership drama, drawing on Nathan's...

24 Jan 2024 · 2h 47min

#90 Classic episode – Ajeya Cotra on worldview diversification and how big the future could be


You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up...

12 Jan 2024 · 2h 59min

#112 Classic episode – Carl Shulman on the common-sense case for existential risk work and its practical implications


Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the...

8 Jan 2024 · 3h 50min

#111 Classic episode – Mushtaq Khan on using institutional economics to predict effective government reforms


If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines. The resulting oil spills dam...

4 Jan 2024 · 3h 22min

2023 Mega-highlights Extravaganza


Happy new year! We've got a different kind of holiday release for you today. Rather than a 'classic episode,' we've put together one of our favourite highlights from each episode of the show that came...

31 Dec 2023 · 1h 53min
