The ongoing standoff between Anthropic and the US Department of Defense (DoD) may well prove a legacy-defining moment for Dario Amodei. By publicly standing up to the US government, a rare move for anyone, let alone the CEO of a company building some of the best AI products available, Amodei has personified principled leadership and ethical conviction.
“We were the first company to put our models on the classified cloud. We were the first company to make custom models for [US] national security purposes. We’re deployed across the intelligence community and military for applications like cyber, combat support operations, various things like these,” Amodei said in an exclusive interview with CBS. This context makes his stand all the more admirable.
Not OpenAI, not Google, not anyone else: it was Anthropic that won a $200 million contract in July 2025 to embed Claude and other custom AI models inside the US defence apparatus. At the time, Anthropic secured guardrails on how the US Department of Defense, now also styled the Department of War, would use its models. There were two key restrictions: no mass surveillance and no autonomous weapons development.
In the interview with CBS, Amodei explained how mass surveillance would be possible with the help of AI systems. “An example of this is something like taking data collected by private firms, having it bought by the government and analyzing it en masse by AI. That actually isn’t illegal,” Amodei said, highlighting how domestic mass surveillance could “get ahead of the law” because the technology is advancing so fast.
“Case number two is fully autonomous weapons. Not the partially autonomous weapons that are used in Ukraine or could potentially be used in Taiwan today,” Amodei explained in the interview. “This is the idea of making weapons that fire without any human involvement. We have some concerns about them. First, the AI systems of today are nowhere near reliable enough to make fully autonomous weapons. And there’s an oversight question too.”
A large army of drones or robots operating without human oversight, with no human soldiers deciding whom to target or shoot at, deeply concerns Amodei and Anthropic. “We need to have a conversation about how that’s overseen. And we haven’t had that conversation yet. We feel strongly that those two use cases should not be allowed.”
Amodei gave two specific examples of how he sees AI in warfare going horribly wrong. “One is around this idea of reliability, which is just it targets the wrong person, it shoots a civilian, it doesn’t show the judgment that a human soldier would show. We don’t want to sell something that we don’t think is reliable, and we don’t want to sell something that could get our own people killed or that could get innocent people killed.”
The second issue Amodei and Anthropic highlight is oversight of active combat decision-making, which would be lost in weapons systems driven solely by AI. “If you think about it, human soldiers, there’s a whole chain of accountability that assumes a human uses their common sense,” he explained. “Maybe we need it at some point because our adversaries will have it, but we need to have a conversation about accountability, about who is holding the button and who can say no, and I think that’s very reasonable,” Amodei emphasised.
Then the interviewer asked a pointed question: does Anthropic know more about national security than the Pentagon and the US government itself? Why should anyone trust the CEO of a private company over the elected officials of their own government? Amodei’s response was full of conviction, and revealing of his principled stand.
“Remember, this isn’t just about terms of use, it’s not just about what our model is legally allowed to do,” Amodei responded. “Our model has a personality. It’s capable of certain things. It’s able to do certain things reliably. It’s able to not do certain things reliably. And I think we are a good judge of what our models can do reliably and what they cannot do reliably. And I think we do have a good view into how this technology is getting ahead of the law.”
Amodei displayed long-term vision as he continued his response. “I don’t think the right long-term solution is for a private company and the Pentagon to argue about this. I think Congress needs to act here. We are thinking about what Congress could do to impose some of these guardrails that don’t hinder our ability to defeat our adversaries, but that allow us to defeat our adversaries in a way that’s in line with the values of our country. But as you know, Congress doesn’t move fast. So I think in the meantime, we do need to draw a line in the sand.”
I can’t recall any tech leader taking such a principled stance on so sensitive an issue, butting heads with their own client, the US government no less, over the perceived dangers of their own product.
Towards the end of the interview, the CBS interviewer asked him whether Anthropic could survive this feud with the US government. Amodei was confident without being cocky: “Not only survive it, we’re gonna be fine.” I think so too.
By refusing to concede to US government pressure, Dario Amodei and Anthropic have shown a rare blend of ethical conviction and individual courage, qualities that set their approach apart in a rapidly evolving debate over AI ethics, safety and governance.