State Department adopts OpenAI as US agencies shift from Anthropic

U.S. Government Agencies Shift Away from Anthropic's AI Products
Three U.S. cabinet-level agencies—the Departments of State, Treasury, and Health and Human Services (HHS)—have decided to stop using Anthropic's artificial intelligence (AI) products. This decision follows a directive from the White House, which has led these agencies to switch to competing providers such as OpenAI. The shift is part of a broader effort by the U.S. government to distance itself from Anthropic, a company that had previously played a key role in advancing AI technologies critical to national security.
This action marks a major setback for Anthropic, a San Francisco-based AI startup that had been at the forefront of developing advanced language models like its chatbot platform, Claude. The federal government’s decision to phase out use of these products signals a growing concern over potential supply-chain risks, particularly in relation to national security.
President Trump's Directive and Agency Responses
President Donald Trump issued an order requiring all U.S. government agencies to gradually eliminate their use of Anthropic's services. The Defense Department had already labeled the company as a supply-chain risk, a designation that typically applies to entities considered dangerous or unreliable. This label could significantly impact Anthropic's standing within the industry, potentially isolating it from future government contracts.
Treasury Secretary Scott Bessent publicly announced on X that his department would terminate all use of Anthropic products, including Claude. Similarly, HHS sent a message to its employees encouraging them to switch to alternative AI platforms like ChatGPT and Gemini. While HHS did not immediately respond to requests for comment, the agency’s decision reflects a broader trend among government bodies to align with other AI providers.
The U.S. State Department also confirmed that it was transitioning the model powering its internal chatbot, StateChat, from Anthropic to OpenAI. According to a memo obtained by reporters, StateChat will now use OpenAI's GPT-4.1. A spokesperson for the State Department, Tommy Pigott, stated that the agency was taking immediate steps to comply with the president's directive.
Expansion of the Boycott
On Monday, William Pulte, director of the Federal Housing Finance Agency, also announced that his bureau, along with mortgage agencies Fannie Mae and Freddie Mac, would cease using Anthropic products. This move further expands the scope of the government's boycott, indicating a coordinated effort across multiple federal agencies.
Earlier in the week, President Trump ordered a six-month phase-out for the Defense Department and other agencies that had been using Anthropic's technology. This decision comes amid ongoing tensions between the Trump administration and Anthropic over concerns about how the company's AI systems are deployed.
Concerns Over AI Safeguards
The conflict between the Trump administration and Anthropic centers around the safeguards in place to prevent the misuse of AI technology. Sources familiar with the negotiations have indicated that the administration has raised concerns about whether the military and intelligence agencies might use Anthropic's AI for autonomous weapons or domestic surveillance. These issues have led to disputes over who should control the deployment of AI technologies—government or industry.
In response to these concerns, OpenAI, a rival company backed by Microsoft, recently announced a deal to deploy its technology within the Defense Department's classified network. CEO Sam Altman emphasized that the agreement would include clear restrictions on the use of AI for domestic surveillance of U.S. citizens. He stated that the Defense Department understood the limitation to "prohibit deliberate tracking, surveillance or monitoring of U.S. persons or nationals."
Broader Implications
The U.S. government's decision to shift away from Anthropic highlights the growing importance of AI in national security and the weight of federal procurement in shaping the industry. As the landscape of AI continues to evolve, the balance between innovation and regulation remains a critical challenge for policymakers and industry leaders alike. The actions taken by these agencies signal a shift in priorities, emphasizing transparency and accountability in the deployment of AI technologies.