#191 (Part 2) – Carl Shulman on government and society after AGI

This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

Links to learn more, highlights, and full transcript.

As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" -- without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers in far more cases.

If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great.

That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. But in practice, a significantly superhuman AI might suggest novel approaches better than any we can imagine today.

In the past we've usually found it easier to predict how hard technologies like planes or factories would change the physical world than to imagine the social shifts those technologies would create. The same is likely true for AI.

Carl Shulman and host Rob Wiblin discuss the above, as well as:

  • The risk of society using AI to lock in its values.
  • The difficulty of preventing coups once AI is key to the military and police.
  • What international treaties we need to make this go well.
  • How to make AI superhuman at forecasting the future.
  • Whether AI will be able to help us with intractable philosophical questions.
  • Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
  • Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time.'
  • Opportunities for listeners to contribute to making the future go well.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:16)
  • The interview begins (00:03:24)
  • COVID-19 concrete example (00:11:18)
  • Sceptical arguments against the effect of AI advisors (00:24:16)
  • Value lock-in (00:33:59)
  • How democracies avoid coups (00:48:08)
  • Where AI could most easily help (01:00:25)
  • AI forecasting (01:04:30)
  • Application to the most challenging topics (01:24:03)
  • How to make it happen (01:37:50)
  • International negotiations, coordination, and auditing (01:43:54)
  • Opportunities for listeners (02:00:09)
  • Why Carl doesn't support enforced pauses on AI research (02:03:58)
  • How Carl is feeling about the future (02:15:47)
  • Rob’s outro (02:17:37)


Producer and editor: Keiran Harris

Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong

Transcriptions: Katy Moore
