#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Rebroadcast: this episode was originally released in July 2020.

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents: it's genuinely difficult to design systems that you can feel confident will behave the way you want them to in all circumstances.

Nick Bostrom wrote the most fleshed-out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom's book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents.

Links to learn more, summary and full transcript.

There have also been very few skeptical experts who have actually sat down and fully engaged with these arguments, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents often rely on fuzzy, abstract concepts, like optimisation power, general intelligence, and goals, as well as on toy thought experiments. And he doesn't think it's clear we should treat these as a strong source of evidence.

Ben's also concerned that these scenarios often involve massive jumps in the capabilities of a single system, and it's really not clear that we should find such jumps plausible. These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine to understand how to obey them.

But Ben points out that in machine learning we can already train systems to engage in behaviours that are quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images and fly helicopters, why shouldn't we expect them to be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance.

He doesn’t think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in.

This is the second episode hosted by Howie Lempel, and he and Ben cover, among many other things:

• The threat of AI systems increasing the risk of permanently damaging conflict or collapse
• The possibility of permanently locking in a positive or negative future
• Contenders for types of advanced systems
• What role AI should play in the effective altruism portfolio

Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcript for this episode: Zakee Ulhaq.
