#81 - Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents: it's actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances.

Nick Bostrom wrote the most fleshed-out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents.

Links to learn more, summary and full transcript.

There have also been very few skeptical experts who have actually sat down and fully engaged with the arguments, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised the classic AI risk arguments as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power, general intelligence, or goals, as well as on toy thought experiments. And he doesn't think it's clear we should treat these as a strong source of evidence.

Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, and it's really not clear we should find such jumps plausible. These toy examples also focus on the idea that, because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine to understand how to obey them.

But Ben points out that in machine learning we can already train systems to engage in behaviours that are quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images and fly helicopters, why don't we think they'll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance.

He doesn’t think there are any slam-dunks for improving the future. So the fact that there are at least plausible pathways to impact from working on AI safety and AI governance, combined with the area still being very neglected, puts it head and shoulders above most fields you might choose to work in.

This is the second episode hosted by our Strategy Advisor Howie Lempel, and he and Ben cover, among many other things:

• The threat of AI systems increasing the risk of permanently damaging conflict or collapse
• The possibility of permanently locking in a positive or negative future
• Contenders for types of advanced systems
• What role AI should play in the effective altruism portfolio

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Episodes (320)

#60 - Phil Tetlock on why accurate forecasting matters for everything, and how you can do it better

Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case? Their beh...

28 June 2019 · 2h 11min

#59 – Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunst...

17 June 2019 · 1h 43min

#58 – Pushmeet Kohli of DeepMind on designing robust & reliable AI systems and how to succeed in AI

When you're building a bridge, responsibility for making sure it won't fall over isn't handed over to a few 'bridge not falling down engineers'. Making sure a bridge is safe to use and remains standin...

3 June 2019 · 1h 30min

Rob Wiblin on human nature, new technology, and living a happy, healthy & ethical life

This is a cross-post of some interviews Rob did recently on two other podcasts — Mission Daily (from 2m) and The Good Life (from 1h13m). Some of the content will be familiar to regular listeners — bu...

13 May 2019 · 2h 18min

#57 – Tom Kalil on how to do the most good in government

You’re 29 years old, and you’ve just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as poss...

23 April 2019 · 2h 50min

#56 - Persis Eskander on wild animal welfare and what, if anything, to do about it

Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, rig...

15 April 2019 · 2h 57min

#55 – Lutter & Winter on founding charter cities with outstanding governance to end poverty

Governance matters. Policy change quickly took China from famine to fortune; Singapore from swamps to skyscrapers; and Hong Kong from fishing village to financial centre. Unfortunately, many governmen...

31 March 2019 · 2h 31min

#54 – OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms

OpenAI’s Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were bo...

19 March 2019 · 2h 53min
