Altman Concedes Defense Deal Sounded Opportunistic and Sloppy Amid Criticism

OpenAI's Revised Agreement with the Department of Defense

OpenAI CEO Sam Altman admitted that the company "shouldn't have rushed" its recent deal with the U.S. Department of Defense and outlined several revisions to the agreement. The admission came after a series of events that sparked controversy and public backlash.

In a post on X, Altman shared what he described as an internal memo, stating that the company would amend the contract to include new language on its principles around topics like surveillance. The revised agreement included specific wording to clarify that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." It also added that "the Department understands the limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

The changes come in response to a broader debate about the ethical use of AI tools within government agencies. The agreement with OpenAI followed a federal ban on Anthropic's AI tools, which had drawn scrutiny over potential risks. The timing of the deal raised questions about why the Department of Defense chose to work with OpenAI rather than Anthropic.

According to Altman, the Defense Department affirmed that OpenAI's tools would not be used by intelligence agencies such as the NSA. He acknowledged that there are many things the technology is not yet ready for, and many areas where the tradeoffs required for safety are not fully understood. Altman added that the company would work with the Pentagon on technical safeguards to ensure responsible use of its AI systems.

In his post, Altman admitted that he had made a mistake and "shouldn't have rushed" to announce the deal on Friday. He explained that the company was trying to de-escalate tensions and avoid a worse outcome, but that the move appeared opportunistic and sloppy. The admission follows a public feud between Anthropic and Washington over safeguards for its Claude AI systems, which ended without an agreement; Defense Secretary Pete Hegseth subsequently designated Anthropic as a supply-chain threat.

Following an initial deal last year, Anthropic was the first AI lab to deploy its models across the Defense Department's classified network. The company later sought guarantees that its tools would not be used for purposes such as domestic surveillance in the U.S., or to develop or operate autonomous weapons without human control. The dispute began after it was revealed that Anthropic's Claude had been used by the U.S. military in its January raid to capture Venezuelan President Nicolás Maduro, though the company did not publicly object to that use case.

OpenAI's deal with the Pentagon came shortly after talks between Anthropic and the Defense Department broke down, though Altman had told employees in a Thursday memo that OpenAI shared the same "red lines" as Anthropic. OpenAI also said in a Friday post that the Defense Department had agreed to its restrictions. It remains unclear why the department accommodated OpenAI and not Anthropic, though government officials have for months criticized Anthropic for allegedly being overly concerned with AI safety.

The timing of OpenAI's deal with the Defense Department prompted online backlash, with many users reportedly ditching ChatGPT for Claude on app stores. In his post, Altman further addressed the controversy, saying: "In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we've agreed to."

Anthropic was founded in 2021 by a group of former OpenAI staff and researchers who left after disagreements over the company's direction. It has since marketed itself as a "safety-first" alternative.