#150 – Tom Davidson on how quickly AI could transform the world
It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.

For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?

You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”

But this 1,000x yearly improvement is a prediction based on *real economic models* created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.

Links to learn more, summary and full transcript.

As a teaser, consider the following:

Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.

You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.

But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.

And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.

And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.

To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore's *An Inconvenient Truth*, and your first chance to play the Nintendo Wii.

Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.

Wild.

Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.

Luisa and Tom also discuss:

• How we might go from GPT-4 to AI disaster
• Tom’s journey from finding AI risk kind of scary to finding it really scary
• Whether international cooperation or an anti-AI social movement can slow AI progress down
• Why it might take just a few years to go from pretty good AI to superhuman AI
• How quickly the number and quality of computer chips we’ve been using for AI have been increasing
• The pace of algorithmic progress
• What ants can teach us about AI
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:53)
  • How we might go from GPT-4 to disaster (00:13:50)
  • Explosive economic growth (00:24:15)
  • Are there any limits for AI scientists? (00:33:17)
  • This seems really crazy (00:44:16)
  • How is this going to go for humanity? (00:50:49)
  • Why AI won’t go the way of nuclear power (01:00:13)
  • Can we definitely not come up with an international treaty? (01:05:24)
  • How quickly we should expect AI to “take off” (01:08:41)
  • Tom’s report on AI takeoff speeds (01:22:28)
  • How quickly will we go from 20% to 100% of tasks being automated by AI systems? (01:28:34)
  • What percent of cognitive tasks AI can currently perform (01:34:27)
  • Compute (01:39:48)
  • Using effective compute to predict AI takeoff speeds (01:48:01)
  • How quickly effective compute might increase (02:00:59)
  • How quickly chips and algorithms might improve (02:12:31)
  • How to check whether large AI models have dangerous capabilities (02:21:22)
  • Reasons AI takeoff might take longer (02:28:39)
  • Why AI takeoff might be very fast (02:31:52)
  • Fast AI takeoff speeds probably means shorter AI timelines (02:34:44)
  • Going from human-level AI to superhuman AI (02:41:34)
  • Going from AGI to AI deployment (02:46:59)
  • Were these arguments ever far-fetched to Tom? (02:49:54)
  • What ants can teach us about AI (02:52:45)
  • Rob’s outro (03:00:32)


Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore
