#96 – Nina Schick on disinformation and the rise of synthetic media

02:00:04 · 2021-04-06

Episode description

You might have heard fears like this in the last few years: What if Donald Trump was woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong Un announced an imminent nuclear strike on the U.S.?

Today's guest Nina Schick, author of Deepfakes: The Coming Infocalypse, thinks these concerns were the result of hysterical reporting, and that the barriers to entry for making a very sophisticated 'deepfake' video today are a lot higher than people think. But she also says that by the end of the decade, YouTubers will be able to produce the kind of content that's currently only accessible to Hollywood studios. So is it just a matter of time until we'll be right to be terrified of this stuff?

Links to learn more, summary and full transcript.

Nina thinks the problem of misinformation and disinformation might be roughly as important as climate change, because as she says: "Everything exists within this information ecosystem, it encompasses everything." We haven't done enough research to properly weigh in on that ourselves, but Rob did present Nina with some early objections, such as:

• Won't people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source?
• If Photoshop didn't lead to total chaos, why should this be any different?

But the grim reality is that if you wrote "I believe that the world will end on April 6, 2022" and pasted it next to a photo of Albert Einstein, a lot of people would believe it was a genuine quote. And Nina thinks that flawless synthetic videos will represent a significant jump in our ability to deceive.

She also points out that the direct impact of fake videos is just one side of the issue. In a world where all media can be faked, everything can be denied. Consider Trump's infamous Access Hollywood tape. If that had happened in 2020 instead of 2016, he would almost certainly have claimed it was fake — and that claim wouldn't be obviously ridiculous. Malignant politicians everywhere could plausibly deny footage of them receiving a bribe, or ordering a massacre. What happens if in every criminal trial, a suspect caught on camera can just look at the jury and say "that video is fake"?

Nina says that, undeniably, this technology is going to give bad actors a lot of scope to escape accountability for their actions. And as we try to inoculate people against being tricked by synthetic media, we risk corroding their trust in all authentic media too. Nina asks: If you can't agree on any set of objective facts or norms on which to start your debate, how on earth do you even run a society?

Nina and Rob also talk about a bunch of other topics, including:

• The history of disinformation, and groups who sow disinformation professionally
• How deepfake pornography is used to attack and silence women activists
• The key differences between how this technology interacts with liberal democracies vs. authoritarian regimes
• Whether we should make it illegal to make a deepfake of someone without their permission
• And the coolest positive uses of this technology

Chapters:

• Rob's intro (00:00:00)
• The interview begins (00:01:28)
• Deepfakes (00:05:49)
• The influence of synthetic media today (00:17:20)
• The history of misinformation and disinformation (00:28:13)
• Text vs. video (00:34:05)
• Privacy (00:40:17)
• Deepfake pornography (00:49:05)
• Russia and other bad actors (00:58:38)
• 2016 vs. 2020 US elections (01:13:44)
• Authoritarian regimes vs. liberal democracies (01:24:08)
• Law reforms (01:31:52)
• Positive uses (01:37:04)
• Technical solutions (01:40:56)
• Careers (01:52:30)
• Rob's outro (01:58:27)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.
