#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion


"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't.

What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris

In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

Links to learn more, summary and full transcript.

They cover:

  • Some crazy anomalies in the historical record of civilisational progress
  • Whether we should think about technology from an evolutionary perspective
  • Whether we ought to expect war to make a resurgence or continue dying out
  • Why we can't end up living like The Jetsons
  • Whether stagnation or cyclical recurring futures seem very plausible
  • What it means that the rate of increase in the economy has been increasing
  • Whether violence is likely between humans and powerful AI systems
  • The most likely reasons for Rob and Ian to be really wrong about all of this
  • How professional historians react to this sort of talk
  • The future of Ian’s work
  • Plenty more

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:27)
  • Why we should expect the future to be wild (00:04:08)
  • How historians have reacted to the idea of radically different futures (00:21:20)
  • Why we won’t end up in The Jetsons (00:26:20)
  • The rise of machine intelligence (00:31:28)
  • AI from an evolutionary point of view (00:46:32)
  • Is violence likely between humans and powerful AI systems? (00:59:53)
  • Most troubling objections to this approach in Ian’s view (01:28:20)
  • Confronting anomalies in the historical record (01:33:10)
  • The cyclical view of history (01:56:11)
  • Is stagnation plausible? (02:01:38)
  • The limit on how long this growth trend can continue (02:20:57)
  • The future of Ian’s work (02:37:17)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

Episodes (325)

Why automating human labour will break our political system | Rose Hadshar, Forethought


The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all. That’s the view of Rose Hadshar, researcher at ...

17 Mar 2h 14min

#238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)


How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to...

10 Mar 1h 11min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)


The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 Mar 31min

#237 – Robert Long on how we're not ready for AI consciousness


Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 Mar 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed


Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’


Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?


In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety


Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 Feb 2h 51min
