“Responsible Scaling Policy v3” by Holden Karnofsky

All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background.

Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update.

First, the big picture:

  • I expect some people will be upset about the move away from a “hard commitments”/“binding ourselves to the mast” vibe. (Anthropic has always had the ability to revise the RSP, and we’ve always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren’t adhering to similar commitments. But it’s been easy to get the impression that the RSP is “binding ourselves to the mast” and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.)
  • I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we’re making. I am excited about the Roadmap, the Risk Reports, the move toward external [...]

---

Outline:

(05:32) How it started: the original goals of RSPs

(11:25) How it's going: the good and the bad

(11:51) A note on my general orientation toward this topic

(14:56) Goal 1: forcing functions for improved risk mitigations

(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard

(18:24) A mixed success/failure story: impact on information security

(20:42) ASL-4 and ASL-5 prep: the wrong incentives

(25:00) When forcing functions do and don't work well

(27:52) Goal 2 (testbed for practices and policies that can feed into regulation)

(29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations)

(30:59) RSP v3's attempt to amplify the good and reduce the bad

(36:01) Do these benefits apply only to the most safety-oriented companies?

(37:40) A revised, but not overturned, vision for RSPs

(39:08) Q&A

(39:10) On the move away from implied unilateral commitments

(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?

(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?

(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?

(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?

(45:10) Why did you have to do this now - couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?

(46:03) Could you have drafted the new RSP, then waited until you had to invoke your escape clause and introduced it then? Or introduced the new RSP as “what we will do if we invoke our escape clause”?

(47:29) The new Risk Reports and Roadmap are nice, but couldn't you have put them out without also making the key revision of moving away from unilateral commitments?

(48:26) Why isn't a unilateral pause a good idea? It could be a big credible signal of danger, which could lead to policy action.

(49:37) Could a unilateral pause ever be a good idea? Why not commit to a unilateral pause in cases where it would be a good idea?

(50:31) Why didn't you communicate about the change differently? I'm worried that the way you framed this will cause audience X to take away message Y.

(51:53) Why don't Anthropic's and your communications about this have a more alarmed and/or disappointed vibe? I reluctantly concede that this revision makes sense on the merits, but I'm sad about it. Aren't you?

(53:19) On other components of the new RSP

(53:24) The new RSP's commitments related to competitors seem vague and weak. Could you add more and/or strengthen these? They don't seem sufficient as-is to provide strong assurance against a prisoner's dilemma world where each relevant company wishes it could be more careful, but rushes due to pressure from others.

(55:29) Why is external review only required at an extreme capability level? Why not just require it now?

(58:06) The new commitments are mostly about Risk Reports and Roadmap - what stops companies from just making these really perfunctory?

(59:18) Why isn't the RSP more adversarially designed such that once a company adopts it, it will improve their practices even if nobody at the company values safety at all?

(01:00:18) What are the consequences of missing your Roadmap commitments? If they aren't dire, will anyone care about them?

(01:00:29) OK, but does that apply to other companies too? How will Roadmaps force other companies to get things done?

(01:00:40) Why aren't the recommendations for industry-wide safety more specific? Why is it built around safety cases instead of ASLs with specific lists of needed risk mitigations?

(01:02:06) What is the point of making commitments if you can revise them anytime?

---

First published:
February 24th, 2026

Source:
https://forum.effectivealtruism.org/posts/DGZNAGL2FNJfftwgE/responsible-scaling-policy-v3-1

---

Narrated by TYPE III AUDIO.

