Norms Impact
OpenAI changes Pentagon deal with US after backlash
A Pentagon-linked AI contract had to be retrofitted with a domestic-surveillance ban, exposing how classified procurement can bypass democratic scrutiny until public backlash forces limits.
Mar 3, 2026
⚖ Legal Exposure
Sources
Summary
OpenAI agreed to amend its classified-use agreement with the U.S. government after backlash over a Pentagon deal for military operations. The revisions add contractual language limiting domestic surveillance use and introduce an additional contract hurdle for intelligence-agency access. The change narrows how government entities can deploy a private AI system in secret programs and signals that public pressure can force post hoc guardrails onto national-security contracting.
Reality Check
Normalizing classified AI contracts without clear, enforceable limits invites a durable expansion of executive surveillance capacity with minimal public visibility. When guardrails are added only after backlash, we train our institutions to treat democratic constraints as optional “patches,” not baseline rules. Requiring a follow-on modification for intelligence-agency use shows how easily access can be widened through contract changes rather than open lawmaking. Over time, this shifts the boundary of state power toward secret delegation to private systems, weakening separation-of-powers oversight and public accountability.
Legal Summary
The article presents an ethics and governance issue: a rushed Pentagon deal for classified AI deployments that was amended after backlash to add explicit prohibitions on domestic surveillance and additional approval steps for intelligence-agency use. It signals recognized surveillance-risk concerns but does not allege actual unlawful surveillance or any transactional quid-pro-quo, personal enrichment, or corrupt official action.
Legal Analysis
<h3>5 C.F.R. Part 2635 (Standards of Ethical Conduct) — appearance/guardrails in federal contracting</h3><ul><li>The article describes an “opportunistic and sloppy” rushed agreement for classified AI deployments and subsequent amendments adding explicit prohibitions on domestic spying and requiring a follow-on modification for intelligence-agency use, raising ethics/oversight concerns about adequate safeguards rather than a money-for-official-act exchange.</li><li>No facts indicate bribery, gratuities, kickbacks, or personal enrichment of officials; the core issue presented is responsible-use constraints and transparency after backlash.</li></ul><h3>18 U.S.C. § 2511 / FISA-related surveillance prohibitions — potential unlawful surveillance risk (not alleged as committed)</h3><ul><li>Altman’s addition of language explicitly prohibiting use “to spy on Americans” and limiting NSA-type use without further contract modification suggests recognized risk that, absent guardrails, deployments could facilitate domestic surveillance.</li><li>The article does not allege that OpenAI systems were actually used to conduct unlawful domestic surveillance; it describes contractual revisions intended to prevent such use.</li></ul><h3>31 U.S.C. § 1341 (Anti-Deficiency Act) / procurement compliance — procedural contracting risk (insufficient facts)</h3><ul><li>The narrative of a rushed classified-use deal later amended could implicate concerns about the sufficiency of procurement documentation and authorization, but the article provides no details on funding, scope, or contracting irregularities.</li></ul><b>Conclusion:</b> The conduct described reflects ethics/oversight and safeguard-adequacy concerns around classified AI deployment and potential surveillance-misuse risk, not a prosecutable structural corruption pattern involving money for official action based on the article’s facts.
Media
Detail
<p>OpenAI said it agreed to changes to its agreement with the U.S. government covering the use of its technology in classified military operations. On Monday, chief executive Sam Altman said the company would add language explicitly prohibiting intentional use of its systems for domestic surveillance of U.S. persons and nationals.</p><p>Altman said the amendments also require intelligence agencies, including the National Security Agency, to obtain a “follow-on modification” to the contract before using OpenAI’s system. He said the company rushed to release the agreement on Friday and described the rollout as a mistake.</p><p>The deal became public after a dispute between OpenAI rival Anthropic and the Department of Defense over concerns about use of Anthropic’s model for mass surveillance and fully autonomous weapons. OpenAI previously said its Pentagon agreement had more guardrails than any prior classified AI deployment agreement. OpenAI faced user backlash after announcing the Pentagon work, including a reported surge in ChatGPT mobile app uninstalls.</p>