#3 - Dario Amodei on OpenAI and how AI will change the world for good and ill

Just two years ago OpenAI didn’t exist. It’s now one of the world’s most elite machine learning research groups, trying to make an AI that’s smarter than humans, with $1b at its disposal.

Even stranger for a Silicon Valley start-up, it’s not a business but a non-profit, founded by Elon Musk and Sam Altman among others to ensure the benefits of AI are distributed broadly across society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

* OpenAI’s latest plans and research progress.
* His paper *Concrete Problems in AI Safety*, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend - something OpenAI has to work to avoid.
* How listeners can best go about pursuing a career in machine learning and AI development themselves.

Read the full transcript, apply for personalised coaching to work on AI safety, see what questions are asked when, and find extra resources to learn more.

1m33s - What OpenAI is doing, Dario’s research and why AI is important
13m - Why OpenAI scaled back its Universe project
15m50s - Why AI could be dangerous
24m20s - Would smarter-than-human AI solve most of the world’s problems?
29m - Paper on five concrete problems in AI safety
43m48s - Has OpenAI made progress?
49m30s - What this backflipping noodle can teach you about AI safety
55m30s - How someone can pursue a career in AI safety and get a job at OpenAI
1h02m30s - Where and what should people study?
1h04m15s - What other paradigms for AI are there?
1h07m55s - How do you go from studying to getting a job? What places are there to work?
1h13m30s - If there's a 17-year-old listening, what should they start reading first?
1h19m - Is this a good way to develop your broader career options? Is it a safe move?
1h21m10s - What if you’re older and haven’t studied machine learning? How do you break in?
1h24m - What about doing this work in academia?
1h26m50s - Is the work frustrating because solutions may not exist?
1h31m35s - How do we prevent a dangerous arms race?
1h36m30s - Final remarks on how to get into doing useful work in machine learning

Episodes (324)

AGI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to...

10 Mar 1h 11min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 Mar 31min

We're Not Ready for AI Consciousness | Robert Long, philosopher and founder of Eleos AI

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 Mar 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 Feb 2h 51min

#234 – David Duvenaud on why 'aligned AI' would still kill democracy

Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of ...

27 Jan 2h 31min
