#96 – Nina Schick on disinformation and the rise of synthetic media
You might have heard fears like this in the last few years: What if Donald Trump were woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong Un announced an imminent nuclear strike on the U.S.?

Today’s guest Nina Schick, author of Deepfakes: The Coming Infocalypse, thinks these concerns were the result of hysterical reporting, and that the barriers to making a truly sophisticated ‘deepfake’ video today are a lot higher than people think.

But she also says that by the end of the decade, YouTubers will be able to produce the kind of content that's currently only accessible to Hollywood studios. So is it just a matter of time until we’ll be right to be terrified of this stuff?

Links to learn more, summary and full transcript.

Nina thinks the problem of misinformation and disinformation might be roughly as important as climate change, because as she says: “Everything exists within this information ecosystem, it encompasses everything.” We haven’t done enough research to properly weigh in on that ourselves, but Rob did present Nina with some early objections, such as:

• Won’t people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source?
• If Photoshop didn’t lead to total chaos, why should this be any different?

But the grim reality is that if you wrote “I believe that the world will end on April 6, 2022” and pasted it next to a photo of Albert Einstein — a lot of people would believe it was a genuine quote. And Nina thinks that flawless synthetic videos will represent a significant jump in our ability to deceive.

She also points out that the direct impact of fake videos is just one side of the issue. In a world where all media can be faked, everything can be denied.

Consider Trump’s infamous Access Hollywood tape. If that happened in 2020 instead of 2016, he would have almost certainly claimed it was fake — and that claim wouldn’t be obviously ridiculous. Malignant politicians everywhere could plausibly deny footage of them receiving a bribe, or ordering a massacre. What happens if in every criminal trial, a suspect caught on camera can just look at the jury and say “that video is fake”?

Nina says that, undeniably, this technology is going to give bad actors a lot of scope to escape accountability for their actions.

As we try to inoculate people against being tricked by synthetic media, we risk corroding their trust in all authentic media too. And Nina asks: If you can't agree on any set of objective facts or norms on which to start your debate, how on earth do you even run a society?

Nina and Rob also talk about a bunch of other topics, including:

• The history of disinformation, and groups who sow disinformation professionally
• How deepfake pornography is used to attack and silence women activists
• The key differences between how this technology interacts with liberal democracies vs. authoritarian regimes
• Whether we should make it illegal to make a deepfake of someone without their permission
• And the coolest positive uses of this technology

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:28)
  • Deepfakes (00:05:49)
  • The influence of synthetic media today (00:17:20)
  • The history of misinformation and disinformation (00:28:13)
  • Text vs. video (00:34:05)
  • Privacy (00:40:17)
  • Deepfake pornography (00:49:05)
  • Russia and other bad actors (00:58:38)
  • 2016 vs. 2020 US elections (01:13:44)
  • Authoritarian regimes vs. liberal democracies (01:24:08)
  • Law reforms (01:31:52)
  • Positive uses (01:37:04)
  • Technical solutions (01:40:56)
  • Careers (01:52:30)
  • Rob’s outro (01:58:27)


Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.
