#133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.

That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

Links to learn more, summary and full transcript.

Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce threats to humanity's future, including risks from nuclear war, synthetic biology, and AI.

Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his 'put up or shut up' resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’, which attracted millions of views, and to develop a website called 'Improve The News' to help readers separate facts from spin.

But given the stunning recent advances in AI capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of mind for him.

You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that's in?" And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.
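To make that concrete, here’s a minimal sketch of how you might pose the same question to a GPT-3-style model through OpenAI’s Python library. This isn’t from the episode: the model name, sampling settings, and expected output are illustrative assumptions.

```python
# Minimal sketch (not from the episode): asking a GPT-3-style model the
# multi-hop question Rob and Max discuss. Uses the legacy pre-1.0 openai
# library; model name and parameters are assumptions for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # supply your own key

prompt = (
    "I'm going to go to this mountain with the faces on it. "
    "What is the capital of the state to the east of the state that that's in?"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a 2022-era GPT-3 model; swap in whatever is current
    prompt=prompt,
    max_tokens=50,
    temperature=0,  # keep the answer as deterministic as possible
)

# If the model chains the steps correctly, the completion should mention
# Saint Paul, Minnesota.
print(response.choices[0].text.strip())
```

The interesting part isn’t the API call itself, but that answering requires chaining several implicit steps (Mount Rushmore → South Dakota → the state to its east → its capital) from a single casual prompt.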

So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.

He says that training a black box to do something smart should be just stage one of a bigger process. Stage two is asking: “How do we get the knowledge out and put it in a safer system?”

Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What's the potential? What are the threats? How might this story play out? What should we be doing to prepare?

Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.

They then spend roughly the last third talking about Max's current big passion: improving the news we consume — where Rob has a few reservations.

They also cover:

• Whether we could understand what superintelligent systems were doing
• The value of encouraging people to think about the positive future they want
• How to give machines goals
• Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
• Whether we’re sleepwalking into disaster
• Whether people actually just want their biases confirmed
• Why Max is worried about government-backed fact-checking
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:19)
  • How Max prioritises (00:12:33)
  • Intro to AI risk (00:15:47)
  • Superintelligence (00:35:56)
  • Imagining a wide range of possible futures (00:47:45)
  • Recent advances in capabilities and alignment (00:57:37)
  • How to give machines goals (01:13:13)
  • Regulatory capture (01:21:03)
  • How humanity fails to fulfil its potential (01:39:45)
  • Are we being hacked? (01:51:01)
  • Improving the news (02:05:31)
  • Do people actually just want their biases confirmed? (02:16:15)
  • Government-backed fact-checking (02:37:00)
  • Would a superintelligence seem like magic? (02:49:50)


Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

