Norms Impact

Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon | CNN Business

A federal contracting ultimatum collides with a private company’s retreat from hard AI safety commitments, normalizing government leverage over guardrails that protect citizens from surveillance and misuse.

Executive

Feb 25, 2026

Summary

Anthropic replaced its prior Responsible Scaling Policy with a nonbinding safety framework and removed its commitment to pause training when model capabilities outstrip the company’s ability to control them.
The shift moves AI “safety” from enforceable internal guardrails to publicly stated goals the company says it can change, as the Pentagon pressures it over contract access and red lines.
In practice, a company that branded itself around restraint is now signaling it will keep building more powerful systems without the former stop-trigger, even while resisting demands tied to weapons control and mass domestic surveillance.

Reality Check

When a Cabinet secretary ties a $200 million contract and an "effective government blacklist" to rolling back safeguards, coercive procurement power is being repurposed as policy enforcement outside the democratic process, and it will not stop with AI companies. The conduct described risks crossing into criminal territory if it functions as an unlawful quid pro quo or extortion under color of official right, potentially implicating the Hobbs Act (18 U.S.C. § 1951) and federal bribery and gratuities law (18 U.S.C. § 201). Even short of that, it is a textbook abuse-of-office pressure tactic. Threatening to invoke the Defense Production Act and to label a firm a "supply chain risk" in order to force weakened safeguards is the kind of weaponized discretion that chills dissent and undermines the public's ability to demand limits on mass domestic surveillance. If this becomes routine, our rights become negotiable terms in federal contracts rather than binding constraints on government power.

Detail

Anthropic announced in a Tuesday blog post that it is changing its two-year-old Responsible Scaling Policy, replacing self-imposed guardrails with a more flexible, nonbinding framework it says "can and will change." The company said shortcomings in the prior policy could hinder its ability to compete in a rapidly growing AI market and that industry peers did not adopt similar constraints.

The new approach includes a "Frontier Safety Roadmap" describing the company's guidelines and safeguards, which it characterized as "public goals" it will "openly grade" rather than "hard commitments." Anthropic also said it will separate its own safety plans from its recommendations for the broader AI industry.

The change followed a meeting with Defense Secretary Pete Hegseth and came a day after Hegseth reportedly set a Friday deadline for Anthropic CEO Dario Amodei to roll back safeguards or risk losing a $200 million Pentagon contract and being placed on what was described as an effective government blacklist. A source said Anthropic is unwilling to drop concerns about AI-controlled weapons and mass domestic surveillance of American citizens, and Hegseth is reported to be considering invoking the Defense Production Act and designating the company a supply chain risk.