The past week has been a fucking wild ride in the AI world. Anthropic held a $200 million contract with the Department of Defense, making Claude the first major AI model deployed on classified government networks. But when it came time to renew, Anthropic drew two firm lines: its technology would not be used for mass domestic surveillance of Americans or for fully autonomous weapons systems. I mean, aren't those two entirely reasonable conditions? The Pentagon pushed back, talks broke down, and President Trump directed federal agencies to stop using Anthropic's tools, while Defense Secretary Hegseth designated the company a "supply chain risk to national security," an unprecedented move against an American company. And then, in a moment that felt almost choreographed, OpenAI swooped in hours later to announce it had secured the very Pentagon contract Anthropic had walked away from.
I want to be clear: I stand with Anthropic on this. What Anthropic did wasn't stubbornness or arrogance; it was integrity. Anthropic stated plainly that it does not believe today's frontier AI models are reliable enough for use in fully autonomous weapons, and that mass domestic surveillance of Americans is a violation of fundamental rights. Those aren't radical positions. They're reasonable ones. OpenAI's CEO Sam Altman even admitted he shares those same "red lines," yet his company rushed through a deal anyway. Many OpenAI employees were frustrated too, with some saying they "really respect" Anthropic for standing its ground. Even Altman later conceded the deal was "definitely rushed" and that "the optics don't look good."
What worries me most isn't just this one contract dispute; it's what it signals. Every AI company, even those currently benefiting from Anthropic's situation, should be unsettled by what happened, because the same treatment could come for any of them the next time a disagreement arises with a cabinet secretary. The good news is that Anthropic CEO Dario Amodei is reportedly back at the negotiating table with the Pentagon, so there's still hope for a resolution. But however this ends, I have enormous respect for a company that was willing to risk billions rather than compromise on what it believes is right. That's the kind of AI company I want building the technology that shapes our future.