#40 - Katja Grace on forecasting future technology & how much we should trust expert predictions

Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?

Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.

Note: Katja's organisation AI Impacts is currently hiring part- and full-time researchers.

People are often pessimistic about our ability to make accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast.

But there are also many things we’re able to predict confidently today -- like the climate of Oxford in five years -- that we no longer give ourselves much credit for.

Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.

Links to learn more, summary and full transcript.

One controversial debate surrounds the idea of an intelligence explosion: how likely is it that there will be a sudden jump in AI capability?

And one way to tackle this is to investigate a more concrete question: what’s the base rate of any technology having a big discontinuity?

A significant historical example was the development of nuclear weapons. Over thousands of years, the efficacy of explosives didn’t increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.
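One way to make that comparison tractable is to measure any jump as the number of years of progress it represents at the technology's previous trend rate, so that discontinuities in very different technologies become comparable. Below is a minimal sketch of that idea, assuming a simple exponential trend; the function name and the numbers are illustrative only, not AI Impacts' actual data or methodology.

```python
import math

def discontinuity_in_years(history, new_year, new_value):
    """history: list of (year, value) points on the old trend (values > 0).
    Returns the size of the jump to new_value, measured in years of progress
    at the previous exponential growth rate."""
    (y0, v0), (y1, v1) = history[0], history[-1]
    # Exponential (log-linear) trend fitted through the first and last old points.
    annual_log_growth = (math.log(v1) - math.log(v0)) / (y1 - y0)
    # Value the old trend would have predicted for new_year, in log space.
    expected_log = math.log(v1) + annual_log_growth * (new_year - y1)
    # Excess progress, converted back into years at the old rate.
    return (math.log(new_value) - expected_log) / annual_log_growth

# Toy example: explosive effectiveness roughly doubling every 50 years for
# centuries, then a ~1,000x jump appearing within a few years.
old_trend = [(1000, 1.0), (1940, 2 ** ((1940 - 1000) / 50))]
jump = discontinuity_in_years(old_trend, 1945, old_trend[-1][1] * 1000)
print(f"Jump equivalent to about {jump:.0f} years of prior progress")
```

Counting how often jumps of a given size have occurred across many historical technologies would then give the kind of base rate the question above is asking for.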

In today’s interview we also discuss:

* Why is AI Impacts one of the most important projects in the world?
* How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
* How does writing an academic paper differ from posting a summary online?
* When will unguided machines be able to produce better and cheaper work than humans for every possible task?
* What’s one of the most likely jobs to be automated soon?
* Are people always just predicting the same timelines for new technologies?
* How do AGI researchers differ from other AI researchers in their predictions?
* What are attitudes to safety research like within ML? Are there regional differences?
* How much should we believe experts generally?
* How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
* How quickly has the processing capacity for machine learning problems been increasing?
* What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
* What should we expect from an economy dominated by AI?
* How much influence can people ever have on things that will happen in 20 years? Are there any examples of people really trying to do this?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.
