More than 750 employees at Anthropic and OpenAI signed an open letter this week telling their bosses, plainly, not to cave. The letter went up at notdivided.org. It had no corporate backing, no PR team, no official blessing. Just researchers and engineers from two companies that compete fiercely with each other putting their names to a shared position: we will not give the Pentagon permission to use our models to conduct mass surveillance or operate autonomous weapons without human oversight.
The letter’s authors understood exactly what was happening. “They’re trying to divide each company with fear that the other will give in,” it reads. “That strategy only works if none of us know where the others stand.” The “they” in question is the Department of War. And the strategy it describes – divide, pressure, conquer – is precisely what the DoW attempted when it went after Anthropic.
Here is what happened. Anthropic built hard limits into Claude’s usage policy: the model cannot be used for domestic mass surveillance, and it cannot autonomously make lethal decisions without human oversight. These aren’t soft guidelines. They’re red lines. And when the Pentagon demanded Anthropic remove them, Anthropic refused.
The DoW’s response was severe. It threatened to invoke the Defense Production Act, a wartime power, to force compliance, then designated Anthropic a “Supply Chain Risk” to national security. That label has real bite. It prohibits any military contractor from doing any business with Anthropic at all. Not just AI contracts. Any business. It is a mechanism to make Anthropic untouchable in the US defense ecosystem.
OpenAI then signed a deal to put its models into the Pentagon’s classified networks. I’ll be honest: when I first read that, it looked bad. It looked like Altman had watched his rival get kneecapped and stepped over the body to grab the contract.
But Altman, in a remarkably candid AMA on X on March 1, tells a different story. He says OpenAI told the DoW, both before and after the Anthropic blacklisting, that part of why it was willing to move quickly was to try to de-escalate. The logic: if OpenAI could sign a deal that still included Anthropic’s red lines, it would prove to the Pentagon that safety guardrails and military contracts aren’t mutually exclusive. It would remove the DoW’s justification for keeping Anthropic frozen out.
When asked directly whether OpenAI had lobbied to push Anthropic out of the running, Altman was blunt: “0%. I wish they still did. I would have had a better week.” He also called the SCR designation “an extremely scary precedent” and said that while he didn’t think Anthropic handled the situation perfectly, the government, as the more powerful party, bears more responsibility for how this went.
Anthropic, for its part, has vowed to challenge the designation in court, arguing it is designed to suppress ethical dissent rather than address any genuine security risk. What this week has revealed, unexpectedly, is an industry more unified than the government (or even me for that matter) anticipated. The DoW bet that competition between labs would make solidarity impossible, that each company would calculate it was better to comply than to watch a rival comply first. The employee letter, Altman’s public defense, OpenAI’s stated lobbying efforts: all of them suggest that bet was wrong.
Whether OpenAI’s deal actually creates the off-ramp it claims to, whether Anthropic’s lawsuit succeeds, and whether the SCR designation gets reversed all remain to be seen. But the US government’s divide-and-conquer play, at least for now, appears to have divided no one.