WASHINGTON, D.C. — President Donald Trump on Friday, February 27, ordered every federal agency to immediately stop using Anthropic's AI tools after the company refused a Pentagon ultimatum to remove safeguards from its Claude model. The order triggered a government-wide ban that defense officials acknowledged will take up to six months to fully implement, according to statements reviewed by reporters from the Department of War and from Anthropic directly.
Within hours of the ban, OpenAI CEO Sam Altman announced a deal with the Pentagon that included the same two restrictions Anthropic had just been blacklisted for defending. The agreement prohibits domestic mass surveillance of American citizens and requires human oversight for any use of force involving autonomous weapons — the precise terms Defense Secretary Pete Hegseth had demanded Anthropic drop by a Friday deadline.
The contradiction was not subtle. Altman posted on X: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Axios reported on March 1 that OpenAI's deal does not explicitly prohibit collection of Americans' publicly available information — a fine-print gap that Anthropic's rejected terms reportedly addressed more directly — though the specifics of what distinguished the two proposals have not been disclosed by the Pentagon.
The ban itself may be less operational than it appears. Documents reviewed by reporters at The Wall Street Journal and The Washington Post confirmed that Palantir's Maven Smart System — which has integrated Claude since 2024 and runs extensively across U.S. military strike operations — continued active deployment against Iran during the 48 hours immediately following the ban announcement. Military leadership used Claude-powered systems for intelligence gathering, target selection, and battlefield simulation across more than 1,000 Iranian targets in a single 24-hour window.
Palantir's integration means removal is not a switch — it's an engineering project. The Pentagon itself acknowledged a six-month phase-out timeline, a window long enough that Claude will remain embedded in active classified operations well into the second half of 2026.
The “supply chain risk” designation — formally issued by Hegseth and confirmed by a senior Department of War official to Bloomberg and CNBC — is unprecedented in its application to a domestic American AI firm. The label typically applies to companies with direct ties to foreign adversaries, such as Huawei. Under the designation, every defense contractor — including Lockheed Martin — must now certify it does not use Anthropic products in any Pentagon-related work.
Anthropic called the designation “legally unsound” and warned it sets a “dangerous precedent for any American company that negotiates with the government,” according to the company’s official statement on February 27. The company argued Hegseth lacks legal authority to extend the designation beyond direct Pentagon contractors.
No court filing has been made as of March 7, 2026.
Silicon Valley's reaction split along commercial lines. Three major cloud providers — Amazon Web Services, Google Cloud, and Microsoft Azure — confirmed continued support for Anthropic's civilian-facing Claude models. The ban applies only to defense work, leaving a substantial commercial business intact. Claude became the most-downloaded free app on Apple's App Store in the days following the ban — surpassing ChatGPT.
Former Pentagon officials were less measured. At least one senior former defense official warned publicly that removing one of the best-integrated AI systems during active military operations carries its own national security risk — a concern that sits awkwardly alongside the administration’s stated justification.
The New York Times reported on March 1 that the Pentagon had also threatened to invoke the Defense Production Act to compel Anthropic to comply — a move that was later dropped, though no formal explanation was provided.
What set this off: Anthropic's Claude model was used in a January U.S. operation in Caracas that led to the capture of Venezuelan President Nicolás Maduro. Anthropic flagged the operation as a violation of its usage policies, which prohibit Claude from being used for violent purposes, weapons development, or surveillance. The Pentagon treated that objection as the breaking point.
One thing reporters could not confirm: whether the Pentagon ever presented Anthropic with a formal written proposal distinguishing what it wanted from Anthropic versus what it accepted from OpenAI. Neither the Department of War nor OpenAI responded to questions on that point.
Anthropic has stated it will challenge the supply chain designation in court. No filing date has been confirmed. The six-month phase-out clock started February 27 — which puts the earliest complete removal of Claude from classified military systems at late August 2026, assuming the legal challenge does not freeze the process.

