Anthropic stated that it was designated a "supply chain risk" and was subsequently banned from use in U.S. governmental agencies after it declined to provide its Claude model for two uses: autonomous weapons and mass surveillance of United States citizens. The account attributes the decision to a conflict between Anthropic's requested limits and the government's demand for "full capabilities."
In the same sequence of events, OpenAI CEO Sam Altman publicly pledged ChatGPT and other OpenAI technologies for use by the U.S. Department of War. Altman said OpenAI's models would not be used for mass surveillance, but a U.S. government official was quoted as stating the models would be used by "all lawful means." The account notes that, under parts of the post-9/11 Patriot Act, certain forms of mass harvesting of communications metadata can be lawful in some scenarios.
The reported actions triggered online backlash, including posts from users claiming they were unsubscribing from ChatGPT.