#40 - Katja Grace on forecasting future technology & how much we should trust expert predictions

Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?

Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.

Note: Katja's organisation AI Impacts is currently hiring part- and full-time researchers.

There’s often pessimism around making accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast.

But there are also many things we’re able to predict confidently today -- like the climate of Oxford in five years -- that we no longer give ourselves much credit for.

Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.

Links to learn more, summary and full transcript.

One controversial debate surrounds the idea of an intelligence explosion: how likely is it that there will be a sudden jump in AI capability?

And one way to tackle this is to investigate a more concrete question: what’s the base rate of any technology having a big discontinuity?

A significant historical example was the development of nuclear weapons. Over thousands of years, the efficacy of explosives didn’t increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.

In today’s interview we also discuss:

* Why is AI Impacts one of the most important projects in the world?
* How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
* How does writing an academic paper differ from posting a summary online?
* When will unguided machines be able to produce better and cheaper work than humans for every possible task?
* What’s one of the most likely jobs to be automated soon?
* Are people always just predicting the same timelines for new technologies?
* How do AGI researchers differ from other AI researchers in their predictions?
* What are attitudes to safety research like within ML? Are there regional differences?
* How much should we believe experts generally?
* How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
* How quickly has the processing capacity for machine learning problems been increasing?
* What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
* What should we expect from an economy dominated by AI?
* How much influence can people ever have on things that will happen in 20 years? Are there any examples of people really trying to do this?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Episodes (325)

AI might let a few people control everything — permanently (article by Rose Hadshar)

Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are worth over $1 trillion, and almost six billion people live in countries ...

12 Dec 2025 · 1h

#230 – Dean Ball on how AI is a huge deal — but we shouldn’t regulate it yet

Former White House staffer Dean Ball thinks it's very likely some form of 'superintelligence' arrives in under 20 years. He thinks AI being used for bioweapon research is "a real threat model, obvious...

10 Dec 2025 · 2h 54min

#229 – Marius Hobbhahn on the race to solve AI scheming before models go superhuman

We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn’t sci-...

3 Dec 2025 · 3h 3min

Rob & Luisa chat kids, the 2016 fertility crash, and how the 50s invented parenting that makes us miserable

Global fertility rates aren’t just falling: the rate of decline is accelerating. From 2006 to 2016, fertility dropped gradually, but since 2016 the rate of decline has increased 4.5-fold. In many weal...

25 Nov 2025 · 1h 59min

#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI

If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree...

20 Nov 2025 · 1h 43min

OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and k...

11 Nov 2025 · 1h 56min

#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. ...

5 Nov 2025 · 2h 20min

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 Oct 2025 · 4h 30min
