#209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit
80,000 Hours Podcast · 27 November 2024

One OpenAI critic calls it “the theft of at least the millennium and quite possibly all of human history.” Are they right?

Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them?

That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.

Links to learn more, highlights, video, and full transcript.

As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:

  • Can fire the CEO.
  • Would receive all the profits after the point OpenAI makes 100x returns on investment.
  • Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”

But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the board will become minority shareholders with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

Chapters:

  • Cold open (00:00:00)
  • What's coming up (00:00:50)
  • Who is Rose Chan Loui? (00:03:11)
  • How OpenAI carefully chose a complex nonprofit structure (00:04:17)
  • OpenAI's new plan to become a for-profit (00:11:47)
  • The nonprofit board is out-resourced and in a tough spot (00:14:38)
  • Who could be cheated in a bad conversion to a for-profit? (00:17:11)
  • Is this a unique case? (00:27:24)
  • Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:28:58)
  • The crazy difficulty of valuing the profits OpenAI might make (00:35:21)
  • Control of OpenAI is independently incredibly valuable and requires compensation (00:41:22)
  • It's very important the nonprofit get cash and not just equity (and few are talking about it) (00:51:37)
  • Is it a farce to call this an "arm's-length transaction"? (01:03:50)
  • How the nonprofit board can best play their hand (01:09:04)
  • Who can mount a court challenge and how that would work (01:15:41)
  • Rob's outro (01:21:25)

Producer: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

Episodes (316)

#145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable

In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success.

It’s tempting to believe this was inevitable — that the arc of history “bends toward justice,” and that as humans get richer, we’ll make even more moral progress.

But today's guest Christopher Brown — a professor of history at Columbia University and a specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries — believes the story of how slavery became unacceptable suggests moral progress is far from inevitable.

Rebroadcast: This episode was originally aired in February 2023.

Links to learn more, video, and full transcript: https://80k.link/CLB

While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher doesn't believe any of the arguments for that conclusion pass muster. If he's right, a counterfactual history where slavery remains widespread in 2023 isn't so far-fetched.

As Christopher lays out in his two key books, Moral Capital: Foundations of British Abolitionism and Arming Slaves: From Classical Times to the Modern Age, slavery has been ubiquitous throughout history. Slavery of some form was fundamental in Classical Greece, the Roman Empire, much of Islamic civilisation, South Asia, and parts of early modern East Asia, including Korea and China.

It was justified on all sorts of grounds that sound mad to us today. But according to Christopher, while there’s evidence that slavery was questioned in many of these civilisations, and periodically attacked by slaves themselves, there was no enduring or successful moral advocacy against slavery until the British abolitionist movement of the 1700s.

That movement first conquered Britain and its empire, then eventually the whole world. But the fact that there's only a single time in history that a persistent effort to ban slavery got off the ground is a big clue that opposition to slavery was a contingent matter: if abolition had been inevitable, we’d expect to see multiple independent abolitionist movements throughout history, providing redundancy should any one of them fail.

Christopher argues that this rarity is primarily down to the enormous economic and cultural incentives to deny the moral repugnancy of slavery, and to crush opposition to it with violence wherever necessary.

Mere awareness is insufficient to guarantee a movement will arise to fix a problem. Humanity continues to allow many severe injustices to persist, despite being aware of them. So why is it so hard to imagine we might have done the same with forced labour?

In this episode, Christopher describes the unique and peculiar set of political, social, and religious circumstances that gave rise to the only successful and lasting anti-slavery movement in human history. These circumstances were sufficiently improbable that Christopher believes there are very nearby worlds where abolitionism might never have taken off.

Christopher and host Rob Wiblin also discuss:

  • Various instantiations of slavery throughout human history
  • Signs of antislavery sentiment before the 17th century
  • The role of the Quakers in the early British abolitionist movement
  • The importance of individual “heroes” in the abolitionist movement
  • Arguments against the idea that the abolition of slavery was contingent
  • Whether there have ever been any major moral shifts that were inevitable

Chapters:

  • Rob's intro (00:00:00)
  • Cold open (00:01:45)
  • Who's Christopher Brown? (00:03:00)
  • Was abolitionism inevitable? (00:08:53)
  • The history of slavery (00:14:35)
  • Signs of antislavery sentiment before the 17th century (00:19:24)
  • Quakers (00:32:37)
  • Attitudes to slavery in other religions (00:44:37)
  • Quaker advocacy (00:56:28)
  • Inevitability and contingency (01:06:29)
  • Moral revolution (01:16:39)
  • The importance of specific individuals (01:29:23)
  • Later stages of the antislavery movement (01:41:33)
  • Economic theory of abolition (01:55:27)
  • Influence of knowledge work and education (02:12:15)
  • Moral foundations theory (02:20:43)
  • Figuring out how contingent events are (02:32:42)
  • Least bad argument for why abolition was inevitable (02:41:45)
  • Were any major moral shifts inevitable? (02:47:29)

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore

20 January · 2h 56min

How to Prevent a Mirror Life Catastrophe | James Smith (Director, Mirror Biology Dialogues Fund)

When James Smith first heard about mirror bacteria, he was sceptical. But within two weeks, he’d dropped everything to work on it full time, considering it the worst biothreat that he’d seen described. What convinced him?

Mirror bacteria would be constructed entirely from molecules that are the mirror images of their naturally occurring counterparts. This seemingly trivial difference creates a fundamental break in the tree of life. For billions of years, the mechanisms underlying immune systems and keeping natural populations of microorganisms in check have evolved to recognise threats by their molecular shape — like a hand fitting into a matching glove.

Learn more, video, and full transcript: https://80k.info/js26

Mirror bacteria would upend that assumption, creating two enormous problems:

  • Many critical immune pathways would likely fail to activate, creating risks of fatal infection across many species.
  • Mirror bacteria could have substantial resistance to natural predators: for example, they would be essentially immune to the viruses that currently keep bacteria populations in check. That could help them spread and become irreversibly entrenched across diverse ecosystems.

Unlike ordinary pathogens, which are typically species-specific, mirror bacteria’s reversed molecular structure means they could potentially infect humans, livestock, wildlife, and plants simultaneously. The same fundamental problem — reversed molecular structure breaking immune recognition — could affect most immune systems across the tree of life. People, animals, and plants could be infected from any contaminated soil, dust, or species.

The discovery of these risks came as a surprise. The December 2024 Science paper that brought international attention to mirror life was coauthored by 38 leading scientists, including two Nobel Prize winners and several who had previously wanted to create mirror organisms.

James is now the director of the Mirror Biology Dialogues Fund, which supports conversations among scientists and other experts about how these risks might be addressed. Scientists tracking the field think that mirror bacteria might be feasible in 10–30 years, or possibly sooner, and researchers have already created substantial components of the cellular machinery needed for mirror life. We can still regulate the precursor technologies before mirror bacteria become technically feasible — but only if we act before the research crosses critical thresholds. Once certain capabilities exist, we can’t undo that knowledge.

Addressing these risks could actually be very tractable: unlike other technologies where massive potential benefits accompany catastrophic risks, mirror life appears to offer minimal advantages beyond academic interest.

Nonetheless, James notes that fewer than 10 people currently work full time on mirror life risks and governance. This is an extraordinary opportunity for researchers in biosecurity, synthetic biology, immunology, policy, and many other fields to help solve an entirely preventable catastrophe — James even believes the issue is on par with AI safety as a priority for some people, depending on their skill set.

The Mirror Biology Dialogues Fund is hiring!

  • Deputy director: https://80k.info/mbdfdd
  • Operations lead: https://80k.info/mbdfops
  • Expression of interest for other roles: https://80k.info/mbdfeoi

This episode was recorded on November 5–6, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who's James Smith? (00:00:49)
  • Why is mirror life so dangerous? (00:01:12)
  • Mirror life and the human immune system (00:15:40)
  • Nonhuman animals will also be at risk (00:28:25)
  • Will plants be susceptible to mirror bacteria? (00:34:57)
  • Mirror bacteria's effect on ecosystems (00:39:34)
  • How close are we to making mirror bacteria? (00:52:16)
  • Policies for governing mirror life research (01:06:39)
  • Countermeasures if mirror bacteria are released into the world (01:22:06)
  • Why hasn't mirror life evolved on its own? (01:28:37)
  • Why wouldn't antibodies or antibiotics save us from mirror bacteria? (01:31:52)
  • Will the environment be toxic to mirror life? (01:39:21)
  • Are there too many uncertainties to act now? (01:44:18)
  • The potential benefits of mirror molecules and mirror life (01:46:55)
  • Might we encounter mirror life in space? (01:52:44)
  • Sounding the alarms about mirror life: the backstory (01:54:55)
  • How to get involved (02:02:44)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operators: Jeremy Chevillotte and Alex Miles
Coordination, transcripts, and web: Katy Moore

13 January · 2h 9min

#144 Classic episode – Athena Aktipis on why cancer is a fundamental universal phenomenon

What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer.

But today’s guest Athena Aktipis says that the opposite of cancer is us: it's having a functional multicellular body that’s cooperating effectively in order to make that multicellular body function.

If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs, and start listening to this podcast instead.

Rebroadcast: this episode was originally released in January 2023.

Links to learn more, video, and full transcript: https://80k.link/AA

As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise: Cells will proliferate when they shouldn't. Cells won't die when they should. Cells won't engage in the kind of division of labour that they should. Cells won’t do the jobs that they're supposed to do. Cells will monopolise resources. And cells will trash the environment.

When we think about animals in the wild, or even bacteria living inside our cells, we understand that they're facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators — like lions, or antibiotics.

We don’t normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster.

Incredibly, the opportunity for evolution by natural selection to operate just over the course of cancer progression is easily faster than all of the evolutionary time that we have had as humans since Homo sapiens came about.

Here’s a quote from Athena: “So you have to shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we're going to map it onto anything like what we experience, a day is at least 10 years for them, right? So it's a very, very different way of thinking.”

You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don’t stop with cancer. They also discuss:

  • Cheating within cells themselves
  • Cooperation in human societies as they exist today — and perhaps in the future, between civilisations spread across different planets or stars
  • Whether it’s too out-there to think of humans as engaging in cancerous behaviour
  • Why elephants get deadly cancers less often than humans, despite having way more cells
  • When a cell should commit suicide
  • The strategy of deliberately not treating cancer aggressively
  • Superhuman cooperation

And at the end of the episode, they cover Athena’s new book Everything is Fine! How to Thrive in the Apocalypse, including:

  • Staying happy while thinking about the apocalypse
  • Practical steps to prepare for the apocalypse
  • And whether a zombie apocalypse is already happening among Tasmanian devils

Chapters:

  • Rob's intro (00:00:00)
  • The interview begins (00:02:22)
  • Cooperation (00:06:12)
  • Cancer (00:09:52)
  • How multicellular life survives (00:20:10)
  • Why our anti-contagious-cancer mechanisms are so successful (00:32:34)
  • Why elephants get deadly cancers less often than humans (00:48:50)
  • Life extension (01:02:00)
  • Honour among cancer thieves (01:06:21)
  • When a cell should commit suicide (01:14:00)
  • When the human body deliberately produces tumours (01:19:58)
  • Surprising approaches for managing cancer (01:25:47)
  • Analogies to human cooperation (01:39:32)
  • Applying the "not treating cancer aggressively" strategy to real life (01:55:29)
  • Humanity on Earth, and Earth in the universe (02:01:53)
  • Superhuman cooperation (02:08:51)
  • Cheating within cells (02:15:17)
  • Father's genes vs. mother's genes (02:26:18)
  • Everything is Fine: How to Thrive in the Apocalypse (02:40:13)
  • Do we really live in an era of unusual risk? (02:54:53)
  • Staying happy while thinking about the apocalypse (02:58:50)
  • Overrated worries about the apocalypse (03:13:11)
  • The zombie apocalypse (03:22:35)

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore

9 January · 3h 30min

#142 Classic episode – John McWhorter on why the optimal number of languages might be one, and other provocative claims about language

John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work, he's written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.

Rebroadcast: this episode was originally released in December 2022.

YouTube video version: https://youtu.be/MEd7TT_nMJE

Links to learn more, video, and full transcript: https://80k.link/JM

We ask him what we think are the most important things everyone ought to know about linguistics, including:

  • Can you communicate faster in some languages than others, or is there some constraint that prevents that?
  • Does learning a second or third language make you smarter or not?
  • Can a language decay and get worse at communicating what people want to say?
  • If children aren't taught a language, how many generations does it take them to invent a fully fledged one of their own?
  • Did Shakespeare write in a foreign language, and if so, should we translate his plays?
  • How much does language really shape the way we think?
  • Are creoles the best languages in the world — languages that ideally we would all speak?
  • What would be the optimal number of languages globally?
  • Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
  • Should we bother to teach foreign languages in UK and US schools?
  • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
  • Will AI models speak a language of their own in the future, one that humans can't understand but which better serves the tradeoffs AI models need to make?

We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits!

Chapters:

  • Rob's intro (00:00:00)
  • Who's John McWhorter? (00:05:02)
  • Does learning another language make you smarter? (00:05:54)
  • Updating Shakespeare (00:07:52)
  • Should we bother teaching foreign languages in school? (00:12:09)
  • Language loss (00:16:05)
  • The optimal number of languages for humanity (00:27:57)
  • Do we reason about the world using language and words? (00:31:22)
  • Can we communicate meaningful information more quickly in some languages? (00:35:04)
  • Creole languages (00:38:48)
  • AI and the future of language (00:50:45)
  • Should we keep ums and ahs in The 80,000 Hours Podcast? (00:59:10)
  • Why the World Looks the Same in Any Language (01:02:07)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Simon Monsour
Video editing: Ryan Kessler and Simon Monsour
Transcriptions: Katy Moore

6 January · 1h 35min

2025 Highlight-o-thon: Oops! All Bests

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

  • Kyle Fish explaining how Anthropic’s AI Claude descends into spiritual woo when left to talk to itself
  • Ian Dunt on why the unelected House of Lords is by far the best part of the British government
  • Sam Bowman’s strategy to get NIMBYs to love it when things get built next to their houses
  • Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

…as well as 18 other top observations and arguments from the past year of the show.

Links to learn more, video, and full transcript: https://80k.info/best25

It's been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:02:35)
  • Helen Toner on whether we're racing China to build AGI (00:03:43)
  • Hugh White on what he'd say to Americans (00:06:09)
  • Buck Shlegeris on convincing AI models they've already escaped (00:12:09)
  • Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (00:15:10)
  • Ian Dunt on how unelected septuagenarians are the heroes of UK governance (00:19:06)
  • Beth Barnes on AI companies being locally reasonable, but globally reckless (00:24:27)
  • Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of their restructure (00:28:02)
  • Toby Ord on whether rich people will get access to AGI first (00:30:13)
  • Andrew Snyder-Beattie on how the worst biorisks are defence dominant (00:34:24)
  • Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (00:39:41)
  • Will MacAskill on what a century of history crammed into a decade might feel like (00:44:07)
  • Kyle Fish on what happens when two instances of Claude are left to interact with each other (00:49:08)
  • Sam Bowman on where the Not In My Back Yard movement actually has a point (00:56:29)
  • Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (01:03:12)
  • Tom Davidson on the potential to install secret AI loyalties at a very early stage (01:07:19)
  • Luisa and Rob discussing how medicine doesn't take the health burden of pregnancy seriously enough (01:10:53)
  • Marius Hobbhahn on why scheming is a very natural path for AI models — and people (01:16:23)
  • Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (01:21:29)
  • Allan Dafoe on how AGI is an inescapable idea but one we have to define well (01:26:19)
  • Ryan Greenblatt on the most likely ways for AI to take over (01:29:35)
  • Updates Daniel Kokotajlo has made to his forecasts since writing and publishing the AI 2027 scenario (01:32:47)
  • Dean Ball on why regulation invites path dependency, and that's a major problem (01:37:21)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

29 December 2025 · 1h 40min

#232 – Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious beings

Most debates about the moral status of AI systems circle the same question: is there something that it feels like to be them? But what if that’s the wrong question to ask?

Andreas Mogensen — a senior researcher in moral philosophy at the University of Oxford — argues that so-called 'phenomenal consciousness' might be neither necessary nor sufficient for a being to deserve moral consideration.

Links to learn more and full transcript: https://80k.info/am25

For instance, a creature on the sea floor that experiences nothing but faint brightness from the sun might have no moral claim on us, despite being conscious. Meanwhile, any being with real desires that can be fulfilled or not fulfilled can arguably be benefited or harmed. Such beings arguably have a capacity for welfare, which means they might matter morally.

And, Andreas argues, desire may not require subjective experience. Desire may need to be backed by positive or negative emotions — but as Andreas explains, there are some reasons to think a being could also have emotions without being conscious.

There’s another underexplored route to moral patienthood: autonomy. If a being can rationally reflect on its goals and direct its own existence, we might have a moral duty to avoid interfering with its choices — even if it has no capacity for welfare.

However, Andreas suspects genuine autonomy might require consciousness after all. To be a rational agent, your beliefs probably need to be justified by something, and conscious experience might be what does the justifying. But even this isn’t clear.

The upshot? There’s a chance we could just be really mistaken about what it would take for an AI to matter morally. And with AI systems potentially proliferating at massive scale, getting this wrong could be among the largest moral errors in history.

In today’s interview, Andreas and host Zershaaneh Qureshi confront all these confusing ideas, challenging their intuitions about consciousness, welfare, and morality along the way. They also grapple with a few seemingly attractive arguments which share a very unsettling conclusion: that human extinction (or even the extinction of all sentient life) could actually be a morally desirable thing.

This episode was recorded on December 3, 2025.

Chapters:

  • Cold open (00:00:00)
  • Introducing Zershaaneh (00:00:55)
  • The puzzle of moral patienthood (00:03:20)
  • Is subjective experience necessary? (00:05:52)
  • What is it to desire? (00:10:42)
  • Desiring without experiencing (00:17:56)
  • What would make AIs moral patients? (00:28:17)
  • Another route entirely: deserving autonomy (00:45:12)
  • Maybe there's no objective truth about any of this (01:12:06)
  • Practical implications (01:29:21)
  • Why not just let superintelligence figure this out for us? (01:38:07)
  • How could human extinction be a good thing? (01:47:30)
  • Lexical threshold negative utilitarianism (02:12:30)
  • So... should we still try to prevent extinction? (02:25:22)
  • What are the most important questions for people to address here? (02:32:16)
  • Is God GDPR compliant? (02:35:32)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Coordination, transcripts, and web: Katy Moore

19 December 2025 · 2h 37min

#231 – Paul Scharre on how AI-controlled robots will and won't change war

In 1983, Stanislav Petrov, a Soviet lieutenant colonel, sat in a bunker watching a red screen flash “MISSILE LAUNCH.” Protocol demanded he report it to superiors, which would very likely trigger a retaliatory nuclear strike. Petrov didn’t. He reasoned that if the US were actually attacking, they wouldn’t fire just five missiles — they’d empty the silos. He bet the fate of the world on a hunch that his machine was broken. He was right.

Paul Scharre, the former Army Ranger who led the Pentagon team that wrote the US military’s first policy on autonomous weapons, has a question: what would an AI have done in Petrov’s shoes? Would an AI system have been flexible and wise enough to make the same judgement? Or would it immediately launch a counterattack?

Paul joins host Luisa Rodriguez to explain why we are hurtling toward a “battlefield singularity” — a tipping point where AI increasingly replaces humans in much of the military, changing the way war is fought with speed and complexity that outpace humans’ ability to keep up.

Links to learn more, video, and full transcript: https://80k.info/ps

Militaries don’t necessarily want to take humans out of the loop. But Paul argues that the competitive pressure of warfare creates a “use it or lose it” dynamic. As former Deputy Secretary of Defense Bob Work put it: “If our competitors go to Terminators, and their decisions are bad, but they’re faster, how would we respond?”

Once that line is crossed, Paul warns we might enter an era of “flash wars” — conflicts that spiral out of control as quickly and inexplicably as a flash crash in the stock market, with no way for humans to call a timeout.

In this episode, Paul and Luisa dissect what this future looks like:

  • Swarming warfare: Why the future isn’t just better drones, but thousands of cheap, autonomous agents coordinating like a hive mind to overwhelm defences.
  • The Gatling gun cautionary tale: The inventor of the Gatling gun thought automating fire would reduce the number of soldiers needed, saving lives. Instead, it made war significantly deadlier. Paul argues AI automation could do the same, increasing lethality rather than creating “bloodless” robot wars.
  • The cyber frontier: While robots have physical limits, Paul argues cyberwarfare is already at the point where AI can act faster than human defenders, leading to intelligent malware that evolves and adapts like a biological virus.
  • The US-China “adoption race”: Paul rejects the idea that the US and China are in a spending arms race (AI is barely 1% of the DoD budget). Instead, it’s a race of organisational adoption — one where the US has massive advantages in talent and chips, but struggles with bureaucratic inertia that might not be a problem for an autocratic country.

Paul also shares a personal story from his time as a sniper in Afghanistan — watching a potential target through his scope — that fundamentally shaped his view on why human judgement, with all its flaws, is the only thing keeping war from losing its humanity entirely.

This episode was recorded on October 23–24, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Paul Scharre? (00:00:46)
  • How will AI and automation transform the nature of war? (00:01:17)
  • Why would militaries take humans out of the loop? (00:12:22)
  • AI in nuclear command, control, and communications (00:18:50)
  • Nuclear stability and deterrence (00:36:10)
  • What to expect over the next few decades (00:46:21)
  • Financial and human costs of future “hyperwar” scenarios (00:50:42)
  • AI warfare and the balance of power (01:06:37)
  • Barriers to getting to automated war (01:11:08)
  • Failure modes of autonomous weapons systems (01:16:28)
  • Could autonomous weapons systems actually make us safer? (01:29:36)
  • Is Paul overall optimistic or pessimistic about increasing automation in the military? (01:35:23)
  • Paul’s takes on AGI’s transformative potential and whether natsec people buy it (01:37:42)
  • Cyberwarfare (01:46:55)
  • US-China balance of power and surveillance with AI (02:02:49)
  • Policy and governance that could make us safer (02:29:11)
  • How Paul’s experience in the Army informed his feelings on military automation (02:41:09)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

17 December 2025 · 2h 45min

AI might let a few people control everything — permanently (article by Rose Hadshar)

Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are worth over $1 trillion, and almost six billion people live in countries without free and fair elections. This is a problem in its own right.

There is still substantial distribution of power, though: global income inequality is falling, over two billion people live in electoral democracies, no country accounts for more than a quarter of global GDP, and no company earns as much as 1% of it.

But in the future, advanced AI could enable much more extreme power concentration than we’ve seen so far.

Many believe that within the next decade the leading AI projects will be able to run millions of superintelligent AI systems thinking many times faster than humans. These systems could displace human workers, leading to much less economic and political power for the vast majority of people; and unless we take action to prevent it, they may end up being controlled by a tiny number of people, with no effective oversight.

Once these systems are deployed across the economy, government, and the military, whatever goals they’re built to have will become the primary force shaping the future. If those goals are chosen by the few, then a small number of people could end up with the power to make all of the important decisions about the future.

This article by Rose Hadshar explores this emerging challenge in detail. You can see all the images and footnotes in the original article on the 80,000 Hours website.

Chapters:

  • Introduction (00:00)
  • Summary (02:15)
  • Section 1: Why might AI-enabled power concentration be a pressing problem? (07:02)
  • Section 2: What are the top arguments against working on this problem? (45:02)
  • Section 3: What can you do to help? (56:36)

Narrated by: Dominic Armstrong
Audio engineering: Dominic Armstrong and Milo McGuire
Music: CORBIT

12 December 2025 · 1h
