February 24, 2026
Anthropic released version 3.0 of its Responsible Scaling Policy in February 2026, removing the company's founding commitment, first written in 2023, never to train or release an AI model unless it could guarantee in advance that safety measures were adequate. Time magazine reported the change in an exclusive, and Anthropic chief science officer Jared Kaplan confirmed it, telling Time the company had determined that unilateral commitments no longer made sense given competitive conditions. The new RSP replaces the hard if-then pause trigger with commitments to publish frontier safety roadmaps, release risk reports every three to six months, and match or exceed the safety efforts of competitors. CEO Dario Amodei and Anthropic's board approved the policy unanimously. Separately, the Wall Street Journal reported that Defense Secretary Pete Hegseth gave Anthropic a February 28, 2026 deadline to open its AI technology to unrestricted military use or lose its Pentagon contract. Two weeks earlier, Mrinank Sharma, head of Anthropic's safeguards research team, had resigned publicly, citing pressure to set aside safety priorities.
The RSP 3.0 change marks a structural shift in how Anthropic positions itself within the AI industry. The original 2023 policy was designed both as an internal forcing function and as an industry model: Anthropic hoped competitors would adopt similar frameworks and that those frameworks would eventually inform binding regulation. Neither outcome fully materialized. No federal AI law is in place, the Trump administration has taken a permissive posture toward AI development, and no major competitor made a commitment as explicit as Anthropic's original pledge. Kaplan's public rationale, that pausing while competitors advance would produce a less safe world, reflects the competitive logic now governing every major AI lab. Critics, including AI safety researcher Chris Painter of the nonprofit Meter, have warned that the new policy is more gradual, meaning danger could accumulate without any clear threshold being crossed. For Coachella Valley businesses and institutions adopting AI tools, the practical implication is that the safety floor for AI products is now set by competitive market dynamics rather than by any company's self-imposed hard limit. AICV's 13 Principles of Responsible AI Use, published at AICoachellaValley.org, remain a locally applicable framework for organizations that want to establish their own standards.