Calm. Methodical. Evidence-Based.

Norms Impact

Pentagon follows through with its threat, labels Anthropic a supply chain risk

The Pentagon turned a supply-chain authority built for foreign adversaries against a domestic company, converting national-security procurement power into a tool for punishing a refusal to relax safeguards.

Executive

Mar 6, 2026

Summary

The Pentagon labeled Anthropic a supply chain risk under authorities meant to counter foreign-adversary threats, effective immediately. This applies a national-security procurement tool to a domestic firm after a dispute over safeguards tied to surveillance and autonomous weapons. The designation reshapes defense AI acquisition by blacklisting a U.S. company and clearing space for rivals in classified military environments.

Reality Check

When national-security procurement authorities built to block foreign infiltration are repurposed against domestic firms, we weaken the guardrails that separate security from coercion. This precedent invites agencies to use exclusionary designations to discipline policy disagreements, chilling independent corporate judgment and narrowing the range of safe, lawful technologies government can access. Over time, it conditions the public to accept opaque, high-impact bans as normal administrative behavior, eroding accountability and concentrating discretionary power inside the executive branch.

Detail

The Department of Defense designated Anthropic a “supply chain risk,” invoking a rule aimed at threats where an adversary could sabotage, introduce unwanted functions, or subvert systems to disrupt, degrade, or spy. The designation was applied to a domestic U.S. company and took effect immediately.

U.S. Sen. Kirsten Gillibrand, a member of the Senate Armed Services and Senate Intelligence Committees, criticized the action as misuse of an authority intended for adversary-controlled technology. A group of former defense and national security officials, including former CIA director Michael Hayden and retired military leaders, sent a letter to lawmakers warning that the move departs from the authority’s intended purpose and sets a precedent.

Following the Pentagon’s action last Friday, OpenAI announced a deal to replace Anthropic with ChatGPT in classified military environments. Anthropic reported a surge in consumer sign-ups for Claude during the week of the dispute.