#133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.

That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

Links to learn more, summary and full transcript.

Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity's future, including nuclear war, synthetic biology, and AI.

Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his 'put up or shut up' resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’, which attracted millions of views, and to develop a website called 'Improve The News' to help readers separate facts from spin.

But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of mind for him.

You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that's in?" And it gives the correct answer (Saint Paul, Minnesota) — something that, just seven years ago, most AI researchers would have said was impossible without fundamental breakthroughs.
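To make that concrete, here's a minimal sketch of posing that exact question to a language model, assuming the official OpenAI Python client with an API key set in the environment; the model name is illustrative, not necessarily the system discussed in the episode.

```python
# Minimal sketch: ask a language model Max's two-hop geography question.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I'm going to go to this mountain with the faces on it. "
    "What is the capital of the state to the east of the state that that's in?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[{"role": "user", "content": prompt}],
)

# Expected answer: Saint Paul, Minnesota (Mount Rushmore is in South Dakota).
print(response.choices[0].message.content)
```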

So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.

He says that training a black box that does something smart should be just stage one of a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”
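As a toy illustration of that two-stage idea (a sketch, not Tegmark's actual research pipeline), one can train an opaque model first and then distil what it has learned into a small, human-readable surrogate:

```python
# Toy sketch of the two-stage idea (illustrative, not Tegmark's method):
# stage one trains an opaque model; stage two extracts its behaviour into
# a transparent surrogate that a human can read and audit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2  # hidden "ground truth" to be learned

# Stage one: a black box that does something smart.
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Stage two: pull the knowledge out into a safer, inspectable system by
# fitting a shallow decision tree to the black box's own predictions.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["x0", "x1"]))
```

The printed tree is a set of explicit if-then rules, so a human can check exactly what was learned; the forest itself offers no such view.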

Today’s conversation starts off with a broad overview of the key questions about artificial intelligence: What's the potential? What are the threats? How might this story play out? What should we be doing to prepare?

Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.

They then spend roughly the last third talking about Max's current big passion: improving the news we consume — where Rob has a few reservations.

They also cover:

• Whether we could understand what superintelligent systems were doing
• The value of encouraging people to think about the positive future they want
• How to give machines goals
• Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
• Whether we’re sleepwalking into disaster
• Whether people actually just want their biases confirmed
• Why Max is worried about government-backed fact-checking
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:19)
  • How Max prioritises (00:12:33)
  • Intro to AI risk (00:15:47)
  • Superintelligence (00:35:56)
  • Imagining a wide range of possible futures (00:47:45)
  • Recent advances in capabilities and alignment (00:57:37)
  • How to give machines goals (01:13:13)
  • Regulatory capture (01:21:03)
  • How humanity fails to fulfil its potential (01:39:45)
  • Are we being hacked? (01:51:01)
  • Improving the news (02:05:31)
  • Do people actually just want their biases confirmed? (02:16:15)
  • Government-backed fact-checking (02:37:00)
  • Would a superintelligence seem like magic? (02:49:50)


Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
