Is there a case against Anthropic? And: The Meta leaks are worse than you think.

When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its supporters are some combination of 'hypocritical', 'naive', and 'anti-democratic'. Rob Wiblin dissects each claim, finding that all three are mediocre arguments dressed up as hard truths. (Though the 'naive' one is at least interesting.)

Watch on YouTube: What Everyone is Missing about Anthropic vs The Pentagon

Plus, from 13:43: Leaked documents from Meta revealed that 10% of the company's total revenue — around $16 billion a year — came from ads for scams and goods Meta had itself banned. These likely enabled the theft of around $50 billion a year from Americans alone. But when an internal anti-fraud team developed a screening method that halved the rate of scams coming from China... well, it wasn't well received.

Watch on YouTube: The Meta Leaks Are Worse Than You Think

Chapters:

  • Introduction (00:00:00)
  • What Everyone is Missing about Anthropic vs The Pentagon (00:00:26)
  • Charge 1: Hypocrisy (00:01:21)
  • Charge 2: Naivety (00:04:55)
  • Charge 3: Undemocratic (00:09:38)
  • You don't have to debate on their terms (00:12:32)
  • The Meta Leaks Are Worse Than You Think (00:13:43)
  • Three fixes for social media's scam problem (00:16:48)
  • We should regulate AI companies as strictly as banks (00:18:46)

Video and audio editing: Dominic Armstrong and Simon Monsour
Transcripts and web: Elizabeth Cox and Katy Moore

Episodes (328)

OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and k...

11 Nov 2025 · 1h 56min

#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. ...

5 Nov 2025 · 2h 20min

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 Oct 2025 · 4h 30min

#225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they c...

27 Oct 2025 · 2h 12min

#224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s ...

2 Oct 2025 · 2h 31min

Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution

Jake Sullivan was the US National Security Advisor from 2021 to 2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought i...

26 Sep 2025 · 1h 5min

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret?...

15 Sep 2025 · 1h 46min

#222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

We don’t know how AIs think or why they do what they do. Or at least, we don’t know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultu...

8 Sep 2025 · 3h 1min
