“Cancel ChatGPT” movement grows after OpenAI pledges its models for Department of War use
A federal ban labeled “supply chain risk” is being used to punish an AI vendor for refusing surveillance and weapons use—shrinking the space for independent constraints on government power.
Feb 28, 2026
Summary
Anthropic was designated a supply chain risk and banned from use in U.S. government agencies after it refused to allow its Claude AI to be used for autonomous weapons or mass surveillance of U.S. citizens. The U.S. government thus moved to exclude a major AI vendor over its usage constraints, while OpenAI pledged its technologies for Department of War use, deferring to the government on deployment limits. The practical consequence is an expanded pathway for federal adoption of large language models under “all lawful means,” including surveillance authorities that can reach Americans.
Reality Check
When agencies can exclude vendors for insisting on limits against surveillance and weaponization, the precedent pressures private actors to accept government-defined “lawful” deployment as the only viable path. That collapses an external guardrail on state power by converting procurement leverage into a compliance tool. Over time, it conditions our institutions to treat expansive surveillance authorities as ordinary operational defaults rather than exceptional powers requiring strict restraint.
Detail
<p>Anthropic stated that it was designated a “supply chain risk” and subsequently banned from use in U.S. government agencies after it declined to provide its Claude model for two uses: autonomous weapons and mass surveillance of United States citizens. The account attributes the decision to a conflict between the limits Anthropic insisted on and the government’s demand for “full capabilities.”</p><p>In the same sequence of events, OpenAI CEO Sam Altman publicly pledged ChatGPT and other OpenAI technologies for U.S. Department of War use. Altman said OpenAI’s models would not be used for mass surveillance, but a U.S. government official was quoted as saying the models would be used by “all lawful means.” The account notes that, under parts of the post-9/11 Patriot Act, certain forms of mass harvesting of communications metadata can be lawful in some scenarios.</p><p>The reported actions triggered online backlash, including posts from users saying they were unsubscribing from ChatGPT.</p>