OpenAI Brokers Military AI Agreement While Anthropic Stands Alone Against White House Pressure

In a tense week that reshaped the US government's relationship with the AI industry, OpenAI brokered a new Pentagon partnership while Anthropic stands defiantly alone, stripped of its government contracts for refusing to abandon its ethical commitments on autonomous weapons and surveillance.
Anthropic’s conflict with the Defense Department was months in the making. The company offered broad support for lawful military uses of its Claude AI system but carved out two exceptions — use in autonomous lethal weapons and use in mass surveillance programs. Pentagon officials considered these exceptions unacceptable and pushed aggressively for their removal.
The Trump administration gave Anthropic its answer in blunt terms: President Trump personally announced a ban on all federal use of Anthropic technology and denounced the company in harsh language on Truth Social. The message to the rest of the industry was equally blunt — restrict our access and face consequences.
OpenAI moved quickly to announce a deal that, according to CEO Sam Altman, includes contractual protection against both mass surveillance and autonomous weapons use. Altman’s internal memo and public statements drew a direct line between his company’s values and Anthropic’s, suggesting the two companies are more aligned than the Pentagon’s treatment of each might suggest.
Nearly 500 workers across OpenAI and Google had signed a joint letter warning against exactly the kind of industry division now unfolding. Anthropic, for its part, appears unmoved by the loss of its government business, issuing a statement that its principles are not for sale at any price and that its ethical restrictions have never obstructed a single legitimate national security mission.