#80 – Stuart Russell on why our approach to AI is broken and how to fix it

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed.

In his new book, Human Compatible, he outlines the 'standard model' of AI development, in which intelligence is measured as the ability to achieve some definite, completely known objective that we've stated explicitly. This is so obvious it almost doesn't seem like a design choice at all, but it is.

Unfortunately there's a big problem with this approach: it's incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we've asked it to. That's true even if the goal isn't what we really want, or the methods it's choosing are ones we would never accept.

We already see AIs misbehaving for this reason. Stuart points to the example of YouTube's recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn't something we wanted, but it helped achieve the algorithm's objective: maximise viewing time.

Like King Midas, who wished that everything he touched would turn to gold and ended up unable to eat, we get too much of what we've asked for.

Links to learn more, summary and full transcript.

This 'alignment' problem will get more and more severe as machine learning is embedded in more and more places: recommending news to us, operating power grids, deciding prison sentences, performing surgery, and fighting wars. If we're ever to hand over much of the economy to thinking machines, we can't count on ourselves to correctly state exactly what we want the AI to do every time.

Stuart isn't just dissatisfied with the current model, though; he has a specific solution. According to him, we need to redesign AI around three principles:

1. The AI system's objective is to achieve what humans want.
2. But the system isn't sure what we want.
3. And it figures out what we want by observing our behaviour.

Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI.

For instance, a machine built on these principles would be happy to be turned off if that's what its owner thought was best, while one built on the standard model would resist being turned off, because deactivation prevents it from achieving its goal. As Stuart says, "you can't fetch the coffee if you're dead."
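The shutdown argument can be sketched numerically. The toy model below is an illustrative assumption of ours (loosely in the spirit of the 'off-switch game' studied by Russell's group at CHAI), not anything from the book: a robot is uncertain about the utility U of its planned action, and can either act immediately or defer to a human who will switch it off whenever U is actually negative. Under uncertainty, deferring comes out ahead, because the human filters out the bad outcomes.

```python
import random

def expected_value_act(utilities):
    """Act without checking: the robot collects U whatever its sign."""
    return sum(utilities) / len(utilities)

def expected_value_defer(utilities):
    """Defer to the human, who only lets the action proceed when U > 0."""
    return sum(max(u, 0) for u in utilities) / len(utilities)

random.seed(0)
# The robot's belief about its action's utility: symmetric around zero,
# i.e. it genuinely doesn't know whether the action is good or bad.
belief = [random.uniform(-1, 1) for _ in range(10_000)]

print(f"E[U | act]   = {expected_value_act(belief):+.3f}")
print(f"E[U | defer] = {expected_value_defer(belief):+.3f}")
```

The gap between the two expectations is the robot's incentive to stay corrigible; note that if the robot were certain about U (principle 2 violated), deferring would gain it nothing, which is the standard-model failure mode Stuart describes.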

These principles lend themselves towards machines that are modest and cautious, and check in when they aren't confident they're truly achieving what we want.

We've made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean based on the context of a situation. And they need to distinguish between cases where we've rejected an option because we considered it and decided it was a bad idea, and cases where we simply haven't thought about it at all.

Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political.

When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want?

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:19:06)
  • Human Compatible: Artificial Intelligence and the Problem of Control (00:21:27)
  • Principles for Beneficial Machines (00:29:25)
  • AI moral rights (00:33:05)
  • Humble machines (00:39:35)
  • Learning to predict human preferences (00:45:55)
  • Animals and AI (00:49:33)
  • Enfeeblement problem (00:58:21)
  • Counterarguments (01:07:09)
  • Orthogonality thesis (01:24:25)
  • Intelligence explosion (01:29:15)
  • Policy ideas (01:38:39)
  • What most needs to be done (01:50:14)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

