By now we are all used to US President Donald Trump’s outbursts on Truth Social. But when that outburst came at Anthropic’s expense – the first AI company to infuse its Claude models into US Department of Defense workflows last year – with Sam Altman’s OpenAI benefiting down the line, that made it all the more interesting.
The feud between OpenAI’s Sam Altman and Anthropic’s Dario Amodei isn’t new. From Anthropic’s Super Bowl ads chastising OpenAI’s ChatGPT ads to Altman hitting back, and more recently Altman and Amodei refusing to shake hands at the India AI Impact Summit, it’s safe to say there’s no love lost between the two. In the matter related to the US Department of Defense (DoD), it seems Altman will have the last laugh over Amodei, with ChatGPT benefiting at the expense of good old Claude.
To fully understand how Anthropic got publicly humiliated by Trump and Defense Secretary Pete Hegseth over its refusal to support the development of autonomous weapons and possible AI-enabled mass surveillance, and how OpenAI swooped in to “save the day,” you have to go back to mid-2025, where it all began.
Away from the rivalry between ChatGPT and Gemini, Anthropic has been quietly doing stellar work with Claude Code and Cowork (and more). In fact, to its credit, it became the first AI company to sign a contract with the US defence department. Signed in July 2025, the $200 million contract allowed Anthropic to integrate its models into US defence mission workflows on classified networks, according to reports.
At the time, the contract had usage restrictions – essential guardrails that prohibited the use of Anthropic’s AI for creating autonomous weapons or conducting mass surveillance. The Pentagon had originally agreed to these restrictions, but tensions soon began to rise.
In January 2026, Defense Secretary Pete Hegseth directed that all US Defense Department AI contracts add legal language allowing the department to deploy AI models without restrictions – for “any lawful use” – within 180 days. This new contractual language was in direct opposition to Anthropic’s restrictions.
By early February 2026, Pentagon officials had conveyed their concerns to Anthropic: that any company’s guardrails could stand in the way of critical wartime actions – like responding to a missile launched toward the United States – actions that demand split-second decision-making, sped up by AI. Anthropic, however, remained committed to its refusal to let its AI be used for developing autonomous weapons systems.
On February 26, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline of 5 pm on Friday, February 27 – to relent and allow unrestricted use of the company’s AI models “for all lawful purposes.” If not, Anthropic would be deemed a supply chain risk and be legally forced to comply under the US Defense Production Act.
In response, Anthropic CEO Amodei said his company won’t be intimidated. “These threats do not change our position: we cannot in good conscience accede to their request,” he wrote in his Thursday statement. This was primarily because Anthropic believed current AI models aren’t reliable enough for autonomous weapons deployments, and that mass domestic surveillance would violate Americans’ fundamental rights.
This is what led President Trump to order the US government to stop using Anthropic’s products, via a Truth Social post on February 27. “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War,” Trump wrote. “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” Defense Secretary Hegseth also said he was labeling Anthropic a supply chain risk to national security, blacklisting it from working with the US military or its contractors going forward.
In response, on February 27, Anthropic posted a statement saying it had “not yet received direct communication” from either the Pentagon or Trump. “We will challenge any supply chain risk designation in court,” the company added, showing no signs of backing down.
Of course, Anthropic’s non-compliance doesn’t help the US Department of Defense – especially at a time when Israel has launched pre-emptive strikes against Iran in the Middle East. The Pentagon still needs a top AI company to step in and save the day. Thank heavens for OpenAI and Sam Altman, right?
OpenAI CEO Sam Altman said late Friday that his company had agreed to terms with the Department of Defense on use of its AI models. “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman wrote.
But here’s the irony, if you read Altman’s entire post. Altman wrote, “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
In other words, OpenAI got the Pentagon to agree in writing to pretty much the exact same restrictions that Anthropic had been demanding all these months – and for which Anthropic got blacklisted.
Unlike Anthropic, OpenAI and Sam Altman had been savvier in their discussions with top US Department of Defense officials, and had already allowed OpenAI’s models to be used by the DoD for “all lawful uses” after months of internal deliberations. OpenAI was comfortable with this because many safeguards were already built into its models, and also, as Altman wrote, “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
And that, in a nutshell, is the story so far. Having built its entire brand on AI safety and ethics, Anthropic lost its partnership with the Pentagon. In contrast, OpenAI swooped in to plug the hole left by Anthropic and earned the prestige of being the US Department of Defense’s classified AI partner.
Also read: Anthropic CEO Dario Amodei: India may benefit most from AI revolution