Sam Altman Admits 'Sloppy' Error in Pentagon Deal Amid Backlash

OpenAI CEO Admits to Missteps in Pentagon Deal

Sam Altman, the CEO of OpenAI, has admitted that the company was too hasty in announcing its new deal with the U.S. Department of Defense. “I think it just looked opportunistic and sloppy,” Altman said. The admission came after significant backlash from users and industry observers.

After Anthropic, the developer of the Claude chatbot, failed to reach an agreement with the U.S. government over the use of its technology in classified applications, OpenAI quickly announced its own arrangement with the Pentagon. The move has raised questions about transparency and communication.

The Complexity of Government Agreements

Altman acknowledged the complexity of the issues at hand and emphasized the need for clear communication. He said that while the company aimed to de-escalate tensions and avoid a worse outcome, its approach came across as opportunistic and careless.

Anthropic had previously set itself apart as a principled player by refusing to allow its technology to be used for mass surveillance or autonomous weapons. This stance led to the Trump administration blacklisting Anthropic and labeling it a supply-chain risk, which could have serious financial consequences.

The details of OpenAI's deal with the U.S. government remain somewhat unclear. Altman stated that two of the company's most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including autonomous weapon systems. These principles were included in the Pentagon agreement, but the language raised questions about how OpenAI managed to secure carveouts that Anthropic did not.

Employee Concerns and Calls for Solidarity

In response to these developments, NotDivided.org published an open letter signed by nearly 800 Google employees and almost 100 OpenAI employees. The letter called for solidarity between rival chatbot developers and criticized the government for trying to pit businesses against each other.

“We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight,” the letter stated.

Altman has since promised an all-hands meeting for employees on Tuesday to address these concerns.

Backlash and Market Shifts

Some groups, such as QuitGPT, have called for a boycott of ChatGPT over its new government collaboration, suggesting Anthropic's Claude as a more ethical alternative. According to Sensor Tower, ChatGPT uninstalls rose 295% from the previous day.

Claude has gained momentum, becoming the top free app in Apple’s App Store over the weekend and maintaining that position as of Tuesday morning. However, it experienced a brief outage early Monday, which Anthropic attributed to "unprecedented" demand.

Changes to the Government Deal

Altman announced in his Monday post, which was also shared with employees, that OpenAI plans to amend its government deal to include new language. This would add a line stating: “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Altman had previously criticized the U.S. government for labeling Anthropic a supply-chain risk. He added late Monday that he hopes the Pentagon will offer Anthropic the same terms that OpenAI has agreed to.

The Defense Department has not yet responded to requests for comment.