#3 - Dario Amodei on OpenAI and how AI will change the world for good and ill
80,000 Hours Podcast · 21 July 2017

Just two years ago OpenAI didn’t exist. It’s now home to one of the most elite groups of machine learning researchers anywhere. They’re trying to make AI that’s smarter than humans, and have $1b at their disposal.

Even stranger for a Silicon Valley start-up, it’s not a business but a non-profit, founded by Elon Musk and Sam Altman among others to ensure the benefits of AI are distributed broadly to all of society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

* OpenAI’s latest plans and research progress.
* His paper *Concrete Problems in AI Safety*, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend - something OpenAI has to work to avoid.
* How listeners can best go about pursuing a career in machine learning and AI development themselves.

Read the full transcript, apply for personalised coaching to work on AI safety, see which questions are asked when, and find extra resources to learn more.

1m33s - What OpenAI is doing, Dario’s research and why AI is important
13m - Why OpenAI scaled back its Universe project
15m50s - Why AI could be dangerous
24m20s - Would smarter than human AI solve most of the world’s problems?
29m - Paper on five concrete problems in AI safety
43m48s - Has OpenAI made progress?
49m30s - What this back-flipping noodle can teach you about AI safety
55m30s - How someone can pursue a career in AI safety and get a job at OpenAI
1h02m30s - Where and what should people study?
1h04m15s - What other paradigms for AI are there?
1h7m55s - How do you go from studying to getting a job? What places are there to work?
1h13m30s - If there's a 17-year-old listening, what should they start reading first?
1h19m - Is this a good way to develop your broader career options? Is it a safe move?
1h21m10s - What if you’re older and haven’t studied machine learning? How do you break in?
1h24m - What about doing this work in academia?
1h26m50s - Is the work frustrating because solutions may not exist?
1h31m35s - How do we prevent a dangerous arms race?
1h36m30s - Final remarks on how to get into doing useful work in machine learning

Episodes (325)

#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trad...

26 July 2024 · 3h 4min

#193 – Sihao Huang on navigating the geopolitics of US–China AI competition

"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold’s, are orders of magnitude smaller in terms of comput...

18 July 2024 · 2h 23min

#192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

"Ring one: total annihilation; no cellular life remains. Ring two, another three-mile diameter out: everything is ablaze. Ring three, another three or five miles out on every side: third-degree burns ...

12 July 2024 · 1h 54min

#191 (Part 2) – Carl Shulman on government and society after AGI

This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order! If we develop artificia...

5 July 2024 · 2h 20min

#191 (Part 1) – Carl Shulman on the economy and national security after AGI

This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order! The human brain does what it does ...

27 June 2024 · 4h 14min

#190 – Eric Schwitzgebel on whether the US is conscious

"One of the most amazing things about planet Earth is that there are complex bags of mostly water — you and me — and we can look up at the stars, and look into our brains, and try to grapple with the ...

7 June 2024 · 2h

#189 – Rachel Glennerster on why we still don’t have vaccines that could save millions

"You can’t charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So...

29 May 2024 · 2h 48min

#188 – Matt Clancy on whether science is good

"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer revi...

23 May 2024 · 2h 40min
