#133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.

That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

Links to learn more, summary and full transcript.

Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 he founded a nonprofit, the Future of Life Institute, which works to reduce a wide range of threats to humanity's future, including those from nuclear war, synthetic biology, and AI.

Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his 'put up or shut up' resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a website called 'Improve The News' to help readers separate facts from spin.

But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of mind for him.

You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that it's in?" And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.

So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.

He says that training a black box that does something smart should be just stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”

Today’s conversation starts with a broad overview of the key questions about artificial intelligence: What's the potential? What are the threats? How might this story play out? What should we be doing to prepare?

Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.

They spend roughly the last third of the conversation on Max's current big passion: improving the news we consume — where Rob has a few reservations.

They also cover:

• Whether we could understand what superintelligent systems were doing
• The value of encouraging people to think about the positive future they want
• How to give machines goals
• Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
• Whether we’re sleepwalking into disaster
• Whether people actually just want their biases confirmed
• Why Max is worried about government-backed fact-checking
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:19)
  • How Max prioritises (00:12:33)
  • Intro to AI risk (00:15:47)
  • Superintelligence (00:35:56)
  • Imagining a wide range of possible futures (00:47:45)
  • Recent advances in capabilities and alignment (00:57:37)
  • How to give machines goals (01:13:13)
  • Regulatory capture (01:21:03)
  • How humanity fails to fulfil its potential (01:39:45)
  • Are we being hacked? (01:51:01)
  • Improving the news (02:05:31)
  • Do people actually just want their biases confirmed? (02:16:15)
  • Government-backed fact-checking (02:37:00)
  • Would a superintelligence seem like magic? (02:49:50)


Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

