U.S. Government Agencies Ordered to Halt Use of Anthropic Services Amid AI Safeguard Dispute
President Donald Trump has ordered all U.S. government agencies to stop using Anthropic services, including its AI platform Claude. The order intensifies an ongoing conflict between the Department of Defense and Anthropic over the implementation of adequate AI safeguards. Trump announced the decision on Truth Social, giving federal agencies a six-month transition period to disengage from Anthropic’s offerings.
Reasons Behind the Order
In his statement, Trump criticized Anthropic for allegedly attempting to impose its terms on the Department of Defense, stating, “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.” He further threatened that failure to cooperate during the phase-out could result in severe civil and criminal consequences for the company.
The friction escalated after Defense Secretary Pete Hegseth warned that Anthropic could be labeled a “supply chain risk” if it did not agree to modify its safeguards, which currently prevent the use of Claude for mass surveillance of American citizens or in fully autonomous weapon systems. In a statement following Trump’s announcement, Hegseth directed the Department of War to classify Anthropic as a supply chain risk to national security, effectively barring contractors and suppliers working with the U.S. military from engaging in commercial activities with the company.
Anthropic’s Response
Anthropic has yet to issue a formal response to these recent developments. Earlier, a company spokesperson said that contract negotiations had made “virtually no progress” toward preventing the potential misuse of its technology, and expressed concern that new language presented as a compromise could allow existing safeguards to be disregarded. The spokesperson reaffirmed Anthropic’s commitment to operational continuity for the Department of Defense and its willingness to continue discussions.
Advocacy Concerns
The president’s actions have drawn criticism from advocacy groups. The Center for Democracy and Technology (CDT) voiced concerns that such measures could set a dangerous precedent, hindering the ability of private companies to engage openly with the government about the appropriate use of their technologies. CDT President Alexandra Givens stated, “These threats undermine the integrity of the innovation ecosystem and normalize an expansive view of executive power.”
Industry Solidarity
Despite the tension, the AI industry appears to be rallying behind Anthropic. Hundreds of employees from Google and OpenAI have signed an open letter expressing solidarity with the company. OpenAI CEO Sam Altman, in an internal memo, indicated that the organization would uphold the same principles as Anthropic regarding the use of AI technologies.
As this situation unfolds, it remains to be seen how the U.S. government and Anthropic will navigate this complex landscape of national security and technological innovation. The implications of this conflict could have far-reaching effects on the future of AI development and its governance in the United States.