Two of OpenAI's most important security principles are that the company's AI models must not be used for secret mass surveillance, and that in military applications a human always takes responsibility, even for autonomous weapons systems, writes OpenAI CEO Sam Altman in a post on the social network X.
Anthropic threatens to sue the US
According to Altman, the Pentagon agrees with OpenAI on these principles, which he says reflect US law, the political governance of military operations, and the terms of the agreement between the government and OpenAI.
Altman's post on X comes just hours after competitor Anthropic threatened to sue the US.
The lawsuit threat comes after the Trump administration announced that US authorities may no longer use the company's products. The Pentagon has also labeled the company a "risk", which means other Pentagon suppliers are barred from using Anthropic's products as well.
This could have repercussions for the AI tool Maven Smart System, which defense firm Palantir Technologies has sold to the Pentagon. Palantir reached an agreement with Anthropic in the fall of 2024 on the use of its AI tools.
"No amount of threats or punishment from the Department of Defense will change our position on domestic mass surveillance or fully autonomous weapons," Anthropic wrote in a statement.
"A dangerous precedent"
Anthropic, which has been valued at $380 billion in the global AI race, claims that the Trump administration's blacklisting violates legal principles and sets "a dangerous precedent" for all companies negotiating with the government.
Previously, the type of blacklisting that has now hit Anthropic, following the recent conflict over how the US military may use the company's tools, had only been applied to foreign companies such as China's Huawei.
Anthropic, whose most popular product is the chatbot Claude, was until recently the only AI vendor that could be used in the Pentagon's classified network.