Friday Ultimatum: Pentagon Enforces Obedience

The US Department of Defense (DoD) has taken off the kid gloves. In a strategic escalation, Secretary of Defense Pete Hegseth has issued AI pioneer Anthropic an ultimatum with a deadline of 5:01 p.m. ET on Friday: the complete removal of technical restrictions on military applications.

The background to this story is explosive. Since Claude was used in the arrest of Venezuelan president Nicolás Maduro in January, the technology has been considered decisive for warfare. The fact that Anthropic subsequently complained to its partner Palantir about the nature of the deployment has opened a deep rift with the Pentagon.

“Secretary of War” Issues Ultimatum

Secretary Hegseth is threatening CEO Dario Amodei with an unprecedented legal pincer movement. He is threatening to invoke the Cold War-era Defense Production Act (DPA) to legally compel Anthropic to make all of its models available for “lawful” purposes. At the same time, he threatens to brand the company as a supply chain risk.

Such a move reveals an absurd paradox: the Pentagon wants to label Anthropic a security risk (which normally means exclusion from contracts) while simultaneously forcing it to cooperate by law. Anthropic has a $200 million contract at stake, and being labeled a security risk would be an economic death sentence for any government business partner.

“Red Lines” in the Crossfire

Amodei has so far stuck to two core prohibitions: Claude may not make autonomous kinetic decisions (the AI deciding on the fatal shot) and may not be used for mass surveillance of US citizens. The company has also admitted that the AI is not yet technically mature enough for such tasks. It remains unreliable, which could be fatal if the AI is to decide over life and death in a war.

Hegseth and the Trump administration’s new AI commissioner, David Sacks, have discredited these safety precautions as “woke AI”. A new executive order attempts to reframe technical safety standards as ideological barriers and to “deprogram” them politically.

The pressure on Anthropic is also intensified by the competition. While Amodei hesitates, OpenAI, xAI, and Google have already signaled that they will follow all lawful orders. Particularly notable is that Elon Musk’s Grok (xAI) received approval for classified networks this week. The strategic lever sends Anthropic a clear signal: you are replaceable.

This is tragic, because Anthropic was the first company approved for use in classified military systems, precisely because Claude was considered particularly secure and controllable. That monopoly is crumbling rapidly. Only in July 2025 were its competitors all awarded contracts worth up to $200 million; now they could outbid Anthropic.

A New Era: The End of Non-Profit AI

A brief assessment of what I think this news means: we are witnessing the moment when the state finally breaks the autonomy of the tech industry. When safety is declared a political obstacle, the concept of responsible AI loses its basis. I have three thoughts on this:

  • Security is not a culture war: The reinterpretation of technical guardrails as “wokeness” by actors like Hegseth and Sacks is a dangerous weaponization. It is not about ideology at all, but about the stability and predictability of highly complex systems.
  • The death of the public benefit model: Anthropic was founded by ex-OpenAI people like Amodei as a counter-proposal to OpenAI. If the company folds under the DPA, the idea of public benefit AI is effectively dead in the face of national security interests.
  • The export of unleashed AI: If the Pentagon wins this tussle, deprogrammed, security-reduced models will become the new US standard. For Europe, this means we could soon be flooded with AI systems whose ethical airbags have been deactivated on Washington’s orders. In short: in war, anything can be demanded of such an AI, no matter how brutal or controversial the order.

The outcome of this power struggle will define whether we retain control over autonomous systems — or whether we sacrifice safety for the promise of absolute military dominance.

The question remains: Will we still be able to control the ghosts we summoned once they have learned to shoot without a human veto? What do you think about this?