#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.

This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

Today’s guest, the computer scientist and polymath Nova DasSarma, works on the security team at the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia.

Rebroadcast: this episode was originally released in June 2022.

Links to learn more, highlights, and full transcript.

As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

If unaligned with the goals of their owners or humanity as a whole, such broadly capable models might naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.

As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.

If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:52)
  • The interview begins (00:02:44)
  • Why computer security matters for AI safety (00:07:39)
  • State of the art in information security (00:17:21)
  • The hack of Nvidia (00:26:50)
  • The most secure systems that exist (00:36:27)
  • Formal verification (00:48:03)
  • How organisations can protect against hacks (00:54:18)
  • Is ML making security better or worse? (00:58:11)
  • Motivated 14-year-old hackers (01:01:08)
  • Disincentivising actors from attacking in the first place (01:05:48)
  • Hofvarpnir Studios (01:12:40)
  • Capabilities vs safety (01:19:47)
  • Interesting design choices with big ML models (01:28:44)
  • Nova’s work and how she got into it (01:45:21)
  • Anthropic and career advice (02:05:52)
  • $600M Ethereum hack (02:18:37)
  • Personal computer security advice (02:23:06)
  • LastPass (02:31:04)
  • Stuxnet (02:38:07)
  • Rob's outro (02:40:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

Episodes (297)

#121 – Matthew Yglesias on avoiding the pundit's fallacy and how much military intervention can be used for good

If you read polls saying that the public supports a carbon tax, should you believe them? According to today's guest — journalist and blogger Matthew Yglesias — it's complicated, but probably not.

Links to learn more, summary and full transcript.

Interpreting opinion polls about specific policies can be a challenge, and it's easy to trick yourself into believing what you want to believe. Matthew invented a term for a particular type of self-delusion called the 'pundit's fallacy': "the belief that what a politician needs to do to improve his or her political standing is do what the pundit wants substantively." If we want to advocate not just for ideas that would be good if implemented, but ideas that have a real shot at getting implemented, we should do our best to understand public opinion as it really is.

The least trustworthy polls are published by think tanks and advocacy campaigns that would love to make their preferred policy seem popular. These surveys can be designed to nudge respondents toward the desired result — for example, by tinkering with question wording and order or shifting how participants are sampled. And if a poll produces the 'wrong answer', there's no need to publish it at all, so the 'publication bias' with these sorts of surveys is large.

Matthew says polling run by firms or researchers without any particular desired outcome can be taken more seriously. But the results that we ought to give by far the most weight are those from professional political campaigns trying to win votes and get their candidate elected, because they have both the expertise to do polling properly and a very strong incentive to understand what the public really thinks.

The problem is, campaigns run these expensive surveys because they think that having exclusive access to reliable information will give them a competitive advantage. As a result, they often don’t publish the findings, and instead use them to shape what their candidate says and does. Journalists like Matthew can call up their contacts and get a summary from people they trust. But being unable to publish the polling itself, they're unlikely to be able to persuade sceptics.

When assessing what ideas are winners, one thing Matthew would like everyone to keep in mind is that politics is competitive, and politicians aren't (all) stupid. If advocating for your pet idea were a great way to win elections, someone would try it and win, and others would copy.

One other check that's more reliable than polling is real-world experience. For example, voters may say they like a carbon tax on the phone — but the very liberal Washington State roundly rejected one in ballot initiatives in 2016 and 2018.

Of course you may want to advocate for what you think is best, even if it wouldn't pass a popular vote in the face of organised opposition. The public's ideas can shift, sometimes dramatically and unexpectedly. But at least you'll be going into the debate with your eyes wide open.

In this extensive conversation, host Rob Wiblin and Matthew also cover:

  • How should a humanitarian think about US military interventions overseas?
  • From an 'effective altruist' perspective, was the US wrong to withdraw from Afghanistan?
  • Has NATO ultimately screwed over Ukrainians by misrepresenting the extent of its commitment to their independence?
  • What philosopher does Matthew think is underrated?
  • How big a risk is ubiquitous surveillance?
  • What does Matthew think about wild animal suffering, anti-ageing research, and autonomous weapons?
  • And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:05)
  • Autonomous weapons (00:04:42)
  • India and the US (00:07:25)
  • Evidence-backed interventions for reducing the harm done by racial prejudices (00:08:38)
  • Factory farming (00:10:44)
  • Wild animal suffering (00:12:41)
  • Vaccine development (00:15:20)
  • Anti-ageing research (00:16:27)
  • Should the US develop a semiconductor industry? (00:19:13)
  • What we should do about various existential risks (00:21:58)
  • What governments should do to stop the next pandemic (00:24:00)
  • Comets and supervolcanoes (00:31:30)
  • Nuclear weapons (00:34:25)
  • Advances in AI (00:35:46)
  • Surveillance systems (00:38:45)
  • How Matt thinks about public opinion research (00:43:22)
  • Issues with trusting public opinion polls (00:51:18)
  • The influence of prior beliefs (01:05:53)
  • Loss aversion (01:12:19)
  • Matt's take on military adventurism (01:18:54)
  • How military intervention looks as a humanitarian intervention (01:29:12)
  • Where Matt does favour military intervention (01:38:27)
  • Why smart people disagree (01:44:24)
  • The case for NATO taking an active stance in Ukraine (01:57:34)
  • One Billion Americans (02:08:02)
  • Matt’s views on the effective altruism community (02:11:46)
  • Matt’s views on the longtermist community (02:19:48)
  • Matt’s struggle to become more of a rationalist (02:22:42)
  • Megaprojects (02:26:20)
  • The impact of Matt’s work (02:32:28)
  • Matt’s philosophical views (02:47:58)
  • The value of formal education (02:56:59)
  • Worst thing Matt’s ever advocated for (03:02:25)
  • Rob’s outro (03:03:22)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

16 Feb 2022 · 3h 4min

#120 – Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy

In 2014 Taiwan was rocked by mass protests against a proposed trade agreement with China that was about to be approved without the usual parliamentary hearings. Students invaded and took over the Parliament. But rather than chant slogans, they livestreamed their own parliamentary debate over the trade deal, allowing volunteers to speak both in favour and against.

Instead of polarising the country further, this so-called 'Sunflower Student Movement' ultimately led to a bipartisan consensus that Taiwan should open up its government. That process has gradually made it one of the most communicative and interactive administrations anywhere in the world.

Today's guest — programming prodigy Audrey Tang — initially joined the student protests to help get their streaming infrastructure online. After the students got the official hearings they wanted and went home, she was invited to consult for the government. And when the government later changed hands, she was invited to work in the ministry herself.

Links to learn more, summary and full transcript.

During six years as the country's 'Digital Minister' she has been helping Taiwan increase the flow of information between institutions and civil society, and has launched original experiments trying to make democracy itself work better. That includes developing new tools to identify points of consensus between groups that mostly disagree, building social media platforms optimised for discussing policy issues, helping volunteers fight disinformation by making their own memes, and allowing the public to build their own alternatives to government websites whenever they don't like how they currently work.

As part of her ministerial role Audrey also sets aside time each week to help online volunteers working on government-related tech projects get the help they need. How does she decide who to help? She doesn't — that decision is made by members of an online community who upvote the projects they think are best.

According to Audrey, a more collaborative mentality among the country's leaders has helped increase public trust in government, and taught bureaucrats that they can (usually) trust the public in return. Innovations in Taiwan may offer useful lessons to people who want to improve humanity's ability to make decisions and get along in large groups anywhere in the world.

We cover:

  • Why it makes sense to treat Facebook as a nightclub
  • The value of having no reply button, and of getting more specific when you disagree
  • Quadratic voting and funding
  • Audrey’s experiences with the Sunflower Student Movement
  • Technologies Audrey is most excited about
  • Conservative anarchism
  • What Audrey’s day-to-day work looks like
  • Whether it’s ethical to eat oysters
  • And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:04)
  • Global crisis of confidence in government (00:07:06)
  • Treating Facebook as a nightclub (00:10:55)
  • Polis (00:13:48)
  • The value of having no reply button (00:24:33)
  • The value of getting more specific (00:26:13)
  • Concerns with Polis (00:30:40)
  • Quadratic voting and funding (00:42:16)
  • Sunflower Student Movement (00:55:24)
  • Promising technologies (01:05:44)
  • Conservative anarchism (01:22:21)
  • What Audrey’s day-to-day work looks like (01:33:54)
  • Taiwanese politics (01:46:03)
  • G0v (01:50:09)
  • Rob’s outro (02:05:09)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

2 Feb 2022 · 2h 5min

#43 Classic episode - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines

Rebroadcast: this episode was originally released in September 2018.

In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”.

Daniel Ellsberg – leaker of the Pentagon Papers, which helped end the Vietnam War and the Nixon presidency – claims in his book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked.

Links to learn more, summary and full transcript.

The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today. If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere. As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity.

You might think the United States would have a more sensible nuclear launch policy. You’d be wrong. As Ellsberg explains, based on his first-hand experience as a nuclear war planner in the 50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth. The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe.

The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival. Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed.

This strategy only works, though, if you tell the enemy you’ve done it. Instead, since the 50s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity. Strategically, the setup is stupid. Ethically, it is monstrous.

So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization? Daniel explores these questions eloquently and urgently in his book. Today we cover:

  • Why full disarmament today would be a mistake, and the optimal number of nuclear weapons to hold
  • How well are secrets kept in the government?
  • What was the risk of the first atomic bomb test?
  • Do we have a reliable estimate of the magnitude of a ‘nuclear winter’?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

18 Jan 2022 · 2h 35min

#35 Classic episode - Tara Mac Aulay on the audacity to fix the world without asking permission

Rebroadcast: this episode was originally released in June 2018.

How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’.

At 15 she took her first job – an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs 30% she was quickly promoted, and at 16 sent in to overhaul dozens of failing stores in a final effort to save them from closure.

That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator.

In this episode Tara shows how the ability to do practical things, avoid major screw-ups, and design systems that scale is both rare and precious.

Links to learn more, summary and full transcript.

People with an operations mindset spot failures others can't see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face. But as Tara's experience shows, they need to figure out what actually motivates the authorities who often try to block their reforms.

We explore how people with this skillset can do as much good as possible, what 80,000 Hours got wrong in our article 'Why operations management is one of the biggest bottlenecks in effective altruism’, as well as:

  • Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform
  • How a student can save a hospital millions with a simple spreadsheet model
  • The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better
  • What most people misunderstand about operations, and how to tell if you have what it takes
  • And finally, operations jobs people should consider applying for

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

10 Jan 2022 · 1h 23min

#67 Classic episode – David Chalmers on the nature and ethics of consciousness

Rebroadcast: this episode was originally released in December 2019.

What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience.

Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off.

The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem':

"Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?"

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

Links to learn more, summary and full transcript.

Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness.

This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what it has going for it, and its likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter.

These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious, our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?

Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far.

Get this episode by subscribing to our show on the world’s most pressing problems and how to solve them: search for 80,000 Hours in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

3 Jan 2022 · 4h 42min

#59 Classic episode - Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable

Rebroadcast: this episode was originally released in June 2019.

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens. He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.

Links to learn more, summary and full transcript.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become before they would reveal them or join a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions.

According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case.

In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss:

  • How much people misrepresent their views in democratic countries
  • Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis
  • When is it justified to encourage your own group to polarise?
  • Sunstein's difficult experiences as a pioneer of animal rights law
  • Whether activists can do better by spending half their resources on public opinion surveys
  • Should people be more or less outspoken about their true views?
  • What might be the next social revolution to take off?
  • How can we learn about social movements that failed and disappeared?
  • How to find out what people really think

Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the transcript on our site.

The 80,000 Hours Podcast is produced by Keiran Harris.

27 Dec 2021 · 1h 43min

#119 – Andrew Yang on our very long-term future, and other topics most politicians won’t touch

Andrew Yang — past presidential candidate, founder of the Forward Party, and leader of the 'Yang Gang' — is kind of a big deal, but is particularly popular among listeners to The 80,000 Hours Podcast. Maybe that's because he's willing to embrace topics most politicians stay away from, like universal basic income, term limits for members of Congress, or what might happen when AI replaces whole industries.

Links to learn more, summary and full transcript.

But even those topics are pretty vanilla compared to our usual fare on The 80,000 Hours Podcast. So we thought it’d be fun to throw Andrew some stranger or more niche questions we hadn't heard him comment on before, including:

  1. What would your ideal utopia in 500 years look like?
  2. Do we need more public optimism today?
  3. Is positively influencing the long-term future a key moral priority of our time?
  4. Should we invest far more to prevent low-probability risks?
  5. Should we think of future generations as an interest group that's disenfranchised by their inability to vote?
  6. The folks who worry that advanced AI is going to go off the rails and destroy us all... are they crazy, or a valuable insurance policy?
  7. Will people struggle to live fulfilling lives once AI systems remove the economic need to 'work'?
  8. Andrew is a huge proponent of ranked-choice voting. But what about 'approval voting' — where basically you just get to say “yea” or “nay” to every candidate that's running — which some experts prefer?
  9. What would Andrew do with a billion dollars to keep the US a democracy?
  10. What does Andrew think about the effective altruism community?
  11. What's one thing we should do to reduce the risk of nuclear war?
  12. Will Andrew's new political party get Trump elected by splitting the vote, the same way Nader got Bush elected back in 2000?

As it turns out, Rob and Andrew agree on a lot, so the episode is less a debate than a chat about ideas that aren’t mainstream yet... but might be one day. They also talk about:

  • Andrew’s views on alternative meat
  • Whether seniors have too much power in American society
  • Andrew’s DC lobbying firm on behalf of humanity
  • How the rest of the world could support the US
  • The merits of 18-year term limits
  • What technologies Andrew is most excited about
  • How much the US should spend on foreign aid
  • Persistence and prevalence of inflation in the US economy
  • And plenty more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:38)
  • Andrew’s hopes for the year 2500 (00:03:10)
  • Tech over the next century (00:07:03)
  • Utopia for realists (00:10:41)
  • Most likely way humanity fails (00:12:43)
  • What Andrew would do with a billion dollars (00:14:44)
  • Approval voting vs. ranked-choice voting (00:19:51)
  • The worry that third party candidates could cause harm (00:21:12)
  • Investment in existential risk reduction (00:25:18)
  • Future generations as a disenfranchised interest group (00:30:37)
  • Humanity Forward (00:32:05)
  • Best way the rest of the world could support the US (00:37:17)
  • Recent advances in AI (00:39:56)
  • Artificial general intelligence (00:46:38)
  • The Windfall Clause (00:49:39)
  • The alignment problem (00:53:02)
  • 18-year term limits (00:56:21)
  • Effective altruism and longtermism (01:00:44)
  • Persistence and prevalence of inflation in the US economy (01:01:25)
  • Downsides of policies Andrew advocates for (01:02:08)
  • What Andrew would have done differently with COVID (01:04:54)
  • Fighting for attention in the media (01:09:25)
  • Right ballpark level of foreign aid for the US (01:11:15)
  • Government science funding (01:11:58)
  • Nuclear weapons policy (01:15:06)
  • US-China relationship (01:16:20)
  • Human challenge trials (01:18:59)
  • Forecasting accuracy (01:20:17)
  • Upgrading public schools (01:21:41)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

20 Dec 2021 · 1h 25min

#118 – Jaime Yassif on safeguarding bioscience to prevent catastrophic lab accidents and bioweapons development

If a rich country were really committed to pursuing an active biological weapons program, there’s not much we could do to stop them. With enough money and persistence, they’d be able to buy equipment and hire people to carry out the work. But what we can do is intervene before they make that decision.

Today’s guest, Jaime Yassif — Senior Fellow for global biological policy and programs at the Nuclear Threat Initiative (NTI) — thinks that stopping states from wanting to pursue dangerous bioscience in the first place is one of our key lines of defence against global catastrophic biological risks (GCBRs).

Links to learn more, summary and full transcript.

It helps to understand why countries might consider developing biological weapons. Jaime says there are three main possible reasons:

  1. Fear of what their adversary might be up to
  2. Belief that they could gain a tactical or strategic advantage, with limited risk of getting caught
  3. Belief that even if they are caught, they are unlikely to be held accountable

In response, Jaime has developed a three-part recipe to create systems robust enough to meaningfully change the cost-benefit calculation.

The first is to substantially increase transparency. If countries aren’t confident about what their neighbours or adversaries are actually up to, misperceptions could lead to arms races that neither side desires. But if you know with confidence that no one around you is pursuing a biological weapons programme, you won’t feel motivated to pursue one yourself.

The second is to strengthen the capabilities of the United Nations’ system to investigate the origins of high-consequence biological events — whether naturally emerging, accidental, or deliberate — and to make sure that the responsibility to figure out the source of bio-events of unknown origin doesn’t fall between the cracks of different existing mechanisms. The ability to quickly discover the source of emerging pandemics is important both for responding to them in real time and for deterring future bioweapons development or use.

And the third is meaningful accountability. States need to know that the consequences for getting caught in a deliberate attack are severe enough to make it a net negative in expectation to go down this road in the first place.

But having a good plan and actually implementing it are two very different things, and today’s episode focuses heavily on the practical steps we should be taking to influence both governments and international organisations, like the WHO and UN — and to help them maximise their effectiveness in guarding against catastrophic biological risks.

Jaime and Rob explore NTI’s current proposed plan for reducing global catastrophic biological risks, and discuss:

  • The importance of reducing emerging biological risks associated with rapid technology advances
  • How we can make it a lot harder for anyone to deliberately or accidentally produce or release a really dangerous pathogen
  • The importance of having multiple theories of risk reduction
  • Why Jaime’s more focused on prevention than response
  • The history of the Biological Weapons Convention
  • Jaime’s disagreements with the effective altruism community
  • And much more

And if you might be interested in dedicating your career to reducing GCBRs, stick around to the end of the episode to get Jaime’s advice — including on how people outside of the US can best contribute, and how to compare career opportunities in academia vs think tanks, and nonprofits vs national governments vs international orgs.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:32)
  • Categories of global catastrophic biological risks (00:05:24)
  • Disagreements with the effective altruism community (00:07:39)
  • Stopping the first person from getting infected (00:11:51)
  • Shaping intent (00:15:51)
  • Verification and the Biological Weapons Convention (00:25:31)
  • Attribution (00:37:15)
  • How to actually implement a new idea (00:50:54)
  • COVID-19: natural pandemic or lab leak? (00:53:31)
  • How much can we rely on traditional law enforcement to detect terrorists? (00:58:20)
  • Constraining capabilities (01:01:24)
  • The funding landscape (01:06:56)
  • Oversight committees (01:14:20)
  • Just winning the argument (01:20:17)
  • NTI’s vision (01:27:39)
  • Suppliers of goods and services (01:33:24)
  • Publishers (01:39:41)
  • Biggest weaknesses of NTI platform (01:42:29)
  • Careers (01:48:31)
  • How people outside of the US can best contribute (01:54:10)
  • Academia vs think tanks vs nonprofits vs government (01:59:21)
  • International cooperation (02:05:40)
  • Best things about living in the US, UK, China, and Israel (02:11:16)

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

13 Dec 2021 · 2h 15min
