A 'Vibe Battle' Shattered the Pentagon's Ties with Anthropic

A Clash of Visions: Anthropic and the Pentagon
In a tense face-to-face meeting with Defense Secretary Pete Hegseth, Dario Amodei, CEO of AI company Anthropic, tried to highlight the risks associated with AI-controlled autonomous weapons. However, Hegseth was unimpressed, even by a CEO whose company had developed tools that were becoming vital for military operations.
“No CEO is going to tell our war fighters what they can and cannot do,” Hegseth said after cutting off Amodei mid-sentence in the meeting on Feb. 24, according to sources familiar with the conversation. This exchange marked the beginning of a growing rift between two individuals with vastly different personalities and perspectives.
This conflict has now escalated into a broader issue between the Trump administration, which has pushed for the rapid deployment of AI as crucial for economic growth and national security, and a major player in the AI industry. “This is a fight about vibes and personalities masquerading as a policy dispute,” said Michael Horowitz, a former Defense Department official involved in AI policy.
The core of the disagreement lies in a breakdown of trust between Anthropic and the Pentagon. According to Horowitz, Anthropic doesn’t trust the Pentagon’s ability to use its technology responsibly, while the Pentagon doesn’t trust Anthropic’s willingness to work on important use cases.
Amodei, who had previously assured employees that the company's contract with the U.S. military was mainly about paperwork, has since framed the conflict as having significant implications for modern warfare and society at large. On Friday, President Trump directed all federal agencies to stop working with Anthropic and criticized the company’s executives for being “leftwing nutjobs.”
Later that day, after a deadline passed for Anthropic to agree to a deal about how its tools could be used, Hegseth designated the company a supply-chain risk. This move, rarely applied to a U.S. company, could hinder Anthropic’s ability to work with other government contractors such as Lockheed Martin, Amazon, and Microsoft. It potentially threatens the business relationships that have made it one of the world’s most valuable startups.
In an ironic twist, minutes before his post, Trump authorized strikes on Iran—attacks that were planned with the involvement of Anthropic’s Claude models, as reported by The Wall Street Journal.

Claude also played a role in the January military operation that captured Venezuelan President Nicolás Maduro and has been used for war gaming and mission planning, according to people familiar with the matter.
For years, Anthropic has been one of the most vocal AI companies advocating for guardrails to ensure the technology is used safely. This stance has occasionally frustrated administration officials, who have embedded Anthropic’s tools widely across the government despite concerns that the company wants to exert control over how those tools are used.
Earlier this year, as part of its safeguards against AI being used to create a bioweapon, Anthropic effectively banned the word “pathogen” in prompts on its unclassified systems, which are used by many agencies, according to sources. The ban made it difficult for employees at the Centers for Disease Control and Prevention to use the AI tool; getting permission to circumvent the restriction took weeks.
Emil Michael, the undersecretary of defense for research and engineering, last week called Amodei a liar for mischaracterizing the Pentagon’s offer and accused him of trying to play God. One administration official noted that other tech CEOs such as Google’s Sundar Pichai or Amazon’s Andy Jassy would not tell the government how to use their technology and would have found a compromise. Another said government AI tools should be ideologically neutral.
By Monday, agencies including the Treasury Department and Department of Health and Human Services were telling employees that their AI tools would no longer work with Claude.
To critics, the moves represent the latest example of the administration strongarming a private company through methods more common in state-run economies. “The Trump administration is taking the Chinese playbook and coercing a U.S. company,” said Navtej Dhillon, former deputy director of the National Economic Council during the Biden administration.
At the core of the conflict is a novel question: who should ultimately control how cutting-edge AI tools are deployed in conflict and society at large?
Amodei and Hegseth approach the question differently. A bespectacled researcher who often twirls his curly hair, Amodei authors lengthy documents philosophizing about the importance of AI safety and is known for his deliberate approach to problem solving. He has been a vegetarian since childhood.
Hegseth is a former Fox News host with several tattoos tied to his Christian faith and military service. Videos of Hegseth lifting weights frequently circulate on social media, and he played a role in President Trump’s decision to rename the Defense Department the Department of War.
As of Monday, the Pentagon hadn’t formally issued the designation against Anthropic, raising the possibility that a deal could be reached.

In recent days, as Anthropic’s clash with the Pentagon intensified, it lost its status as the only AI company approved for use in classified settings. Elon Musk’s xAI recently reached an agreement to be deployed in such settings, and late on Friday, OpenAI announced that it had as well.
The Anthropic fight was never personal and was always about the Defense Department wanting to use its AI tools for all lawful purposes, a Pentagon official said.
Professor Panda
Amodei co-founded Anthropic in 2021 after leaving OpenAI because he felt the company was prioritizing business goals over AI safety. He is known to some of his employees as “Professor Panda.” Amodei and Anthropic’s co-founders committed to donating 80% of their founding stock to charity, a stake now worth billions of dollars.
Amodei chose not to release an early version of Claude in the summer of 2022, fearing that it would start a dangerous technology race. OpenAI released ChatGPT a few weeks later, forcing Anthropic to play catch-up.
While Amodei cemented a reputation for his methodical approach to AI development, Michael and Hegseth became known for their cutthroat approach to business and war. Michael helped build Uber as chief business officer when the company was known for aggressively taking on competitors and regulators. He went on to work with dozens of startups and championed efforts to integrate technology into Pentagon operations.
Michael had a long relationship with OpenAI’s Sam Altman, helping him sell his first startup in 2012. They also worked in the same startup ecosystem while Altman was leading incubator Y Combinator from 2014 to 2019.
While OpenAI pulled ahead with consumers, Anthropic’s Claude tool developed a devout following among coders. It found success nabbing enterprise contracts and raised capital at a blistering clip. It was valued at $380 billion after its most recent fundraising round.
Big investments from Amazon proved particularly beneficial—and became an entryway into the Pentagon. In November 2024, during the final days of the Biden administration, Anthropic and data-mining firm Palantir announced a partnership with Amazon that would give U.S. intelligence and defense agencies access to Claude models.
The partnership gave Anthropic a fast track to be used in classified settings through Palantir’s systems and made Anthropic the first model developer available for the most sensitive Pentagon operations.
Some Anthropic employees questioned how the technology would be used. Were there accountability mechanisms? Might the tools be used in operations in which people were killed?
Amodei reassured staff that the work was more mundane than their questions suggested. In a late 2024 all-hands meeting, he likened it to helping the government finish paperwork and back-office functions more quickly.
Even as Anthropic gained momentum on many fronts, it was rankling administration officials at the start of Trump’s second term.
Amodei’s dire public warnings on the dangers of AI and criticism of companies sending advanced AI chips to China made him one of the few AI executives out of step with Trump. In late May, Amodei warned that AI could destroy about half of all entry-level white-collar jobs.
Trump’s AI czar David Sacks called Anthropic “committed leftists” on the podcast he co-hosts, citing its links to organizations that are Democratic donors. Anthropic had previously hired several Biden-era officials. Amodei called Trump a “feudal warlord” before the 2024 election.
Still, in July Anthropic announced a contract worth up to $200 million from the Pentagon. It also inked an agreement with the central procurement arm of the federal government to let other agencies use Claude.
Around the same time, Sacks and other officials worked on an executive order targeting “woke AI,” a move that was widely seen as going after Anthropic.
The company’s work with the military was seen by some industry players as one way it could rebuff claims that it was “woke,” claims the company has called baseless.

Anthropic touted its Pentagon work at a September event at Washington’s Union Station. But Amodei again criticized the administration for allowing chips to be exported to countries that could pose security threats. He said there are some government officials “who don’t seem to get it, who still think this is an economic race to diffuse our technology to different parts of the world, and not an attempt to build the most powerful technology the world has ever seen.”
Late last year, the Pentagon began discussing changing its contracts with AI companies so that they could use the technology in all lawful use cases. Anthropic’s hesitance to give the blanket approval and desire to maintain red lines banning uses for mass domestic surveillance and autonomous weapons frustrated some administration officials, people familiar with the conversations said.
OpenAI’s Altman sees opportunity
The clash between Anthropic and the Pentagon intensified in January, with the Journal and other media outlets reporting that Anthropic’s contract could be canceled.
After the Venezuela raid, an Anthropic employee asked a Palantir counterpart how Claude was used. Defense Department officials found out and were upset, people familiar with the matter said. Anthropic has said it was a routine call between partners.
During a Jan. 12 speech at Musk’s SpaceX, Hegseth said Grok would join the Pentagon’s military AI platform. He made apparent jabs at Anthropic, saying, “We will not employ AI models that won’t allow you to fight wars.”
The Defense Department kept negotiating, but Anthropic continued to hold firm on its red lines. It wanted the bans made explicit despite Pentagon assurances that it wouldn’t conduct those operations or break the law.
Around the same time, media outlets reported that when Michael asked Amodei a hypothetical about whether the Pentagon could use Claude to take out missiles approaching the U.S., the Anthropic CEO said Defense Department officials should check with the company first. The response reportedly angered the Trump administration. Anthropic denied that Amodei said that.

Suspecting they were at an impasse, Pentagon officials accelerated discussions with Anthropic’s main rival. Michael reached out to Joe Larson, the head of government at OpenAI, to see if the company could begin the process of becoming certified to get deployed on classified systems, according to people familiar with the matter. Officials were already working to secure that status for Musk’s Grok.
As Anthropic’s relationship with the administration hit new lows, allies stepped in to broker a truce. Shyam Sankar, Palantir’s chief technology officer, suggested workarounds, later accepted by rival OpenAI, that would let Anthropic agree to the Pentagon’s terms while maintaining its safeguards, people familiar with the matter said. Anthropic rejected the proposal.
At their Feb. 24 meeting at the Pentagon, Hegseth complimented Amodei on the quality of the company’s models while reiterating his threat to label Anthropic a supply-chain risk. He also lobbed an even bigger threat: invoking the Defense Production Act, a Cold War-era law that gives the government control of key industries, to force Anthropic to do its bidding. The defense secretary gave Amodei until 5:01 p.m. Friday to agree to the military’s right to use the technology in all lawful cases.
On Wednesday night, the Defense Department sent over new suggested contract language.
That day, OpenAI’s Altman reached out to Michael. Altman felt strongly that the risk of either triggering the Defense Production Act or designating Anthropic a supply-chain risk wasn’t good for the country, according to people familiar with their conversation.

But he also saw an opportunity for OpenAI. The company proposed a contract that used language from existing law to keep the guardrails against mass domestic surveillance and autonomous weapons, while not asking the Pentagon to alter its usage policy for the company, according to people familiar with the matter. OpenAI’s contract also included other measures, like deploying researchers with security clearance to monitor how the systems would be used.
OpenAI has a different political profile than Anthropic: The company has praised Trump’s tech strategy and promised investments to build data centers for training AI models. OpenAI President Greg Brockman and his wife donated $25 million to a Trump-aligned super political-action committee last year.
Missed deadline
On Thursday, Amodei reiterated the company’s red lines. “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” a spokesman said.
Some in the Defense Department thought that the two sides were close to a deal before Amodei’s statement. Senators called for both sides to de-escalate the situation.
That day, Altman told staff that the company was working on a deal that might help solve the impasse.
As the Friday deadline approached, Trump said that he was directing federal agencies to cease working with Anthropic. But the parties were still negotiating.
At 5:01 p.m., Michael called Amodei, who didn’t answer. Michael then talked to another top Anthropic executive, offering a deal that Anthropic felt would have left the door open to the collection or analysis of large amounts of data on U.S. residents, people familiar with the matter said.
Some inside Anthropic had thought an agreement was nearly done before that final proposal, which was rejected. Company officials had recently learned that Anthropic had been in line to win a Pentagon contract to use AI to control drones but had fallen out of the running because of the stalled negotiations.
Michael has disputed the company’s characterization of the offer.
Moments later, Hegseth posted on social media that he was designating Anthropic a supply-chain risk.
What happens next isn’t clear, but the standoff appears to be boosting Anthropic’s popularity among consumers. By Sunday, Claude had topped ChatGPT to become the most downloaded app in Apple’s app store.