#154 - Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still.

Today's guest — machine learning researcher Rohin Shah — goes into the Google DeepMind offices each day with that peculiar backdrop to his work.

Links to learn more, summary and full transcript.

He's on the team dedicated to maintaining 'technical AI safety' as these models approach and exceed human capabilities: basically, ensuring that the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important.

In the short term, it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long term, it could be the difference between humanity thriving and disappearing entirely.

For years, Rohin has been on a mission to fairly hear out people across the full spectrum of opinion about risks from artificial intelligence, from doomers to doubters, and to properly understand their points of view. That makes him unusually well placed to give an overview of what we do and don't understand. He has landed somewhere in the middle: troubled by ways things could go wrong, but not convinced there are very strong reasons to expect a terrible outcome.

Today's conversation is wide-ranging and Rohin lays out many of his personal opinions to host Rob Wiblin, including:

  • What he sees as the strongest case both for and against slowing down the rate of progress in AI research.
  • Why he disagrees with most other ML researchers that training a model on a sensible 'reward function' is enough to get a good outcome.
  • Why he disagrees with many on LessWrong that the bar for whether a safety technique is helpful is “could this contain a superintelligence?”
  • That he thinks nobody has very compelling arguments that AI created via machine learning will be dangerous by default, or that it will be safe by default. He believes we just don't know.
  • That he understands that analogies and visualisations are necessary for public communication, but is sceptical that they really help us understand what's going on with ML models, because they're different in important ways from every other case we might compare them to.
  • Why he's optimistic about DeepMind’s work on scalable oversight, mechanistic interpretability, and dangerous capabilities evaluations, and what each of those projects involves.
  • Why he isn't inherently worried about a future where we're surrounded by beings far more capable than us, so long as they share our goals to a reasonable degree.
  • Why it's not enough for humanity to know how to align AI models — it's essential that management at AI labs correctly pick which methods they're going to use and have the practical know-how to apply them properly.
  • Three observations that make him a little more optimistic: humans are a bit muddle-headed and not super goal-orientated; planes don't crash; and universities have specific majors in particular subjects.
  • Plenty more besides.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris

Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell

Transcriptions: Katy Moore

Episodes (325)

Rob Wiblin on plastic straws, nicotine, doping, & whether changing the long-term is really possible

Today's episode is a compilation of interviews I recently recorded for two other shows, Love Your Work and The Neoliberal Podcast.  If you've listened to absolutely everything on this podcast feed, y...

25 Sep 2019 · 3h 14min

Have we helped you have a bigger social impact? Our annual survey, plus other ways we can help you.

1. Fill out our annual impact survey here. 2. Find a great vacancy on our job board. 3. Learn about our key ideas, and get links to our top articles. 4. Join our newsletter for an email about what's n...

16 Sep 2019 · 3min

#63 – Vitalik Buterin on better ways to fund public goods, blockchain's failures, & effective giving

Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communicat...

3 Sep 2019 · 3h 18min

#62 – Paul Christiano on messaging the future, increasing compute, & how CO2 impacts your brain

Imagine that – one day – humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out...

5 Aug 2019 · 2h 11min

#61 - Helen Toner on emerging technology, national security, and China

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricit...

17 July 2019 · 1h 54min

#60 - Phil Tetlock on why accurate forecasting matters for everything, and how you can do it better

Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case? Their beh...

28 June 2019 · 2h 11min

#59 – Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunst...

17 June 2019 · 1h 43min

#58 – Pushmeet Kohli of DeepMind on designing robust & reliable AI systems and how to succeed in AI

When you're building a bridge, responsibility for making sure it won't fall over isn't handed over to a few 'bridge not falling down engineers'. Making sure a bridge is safe to use and remains standin...

3 June 2019 · 1h 30min
