Exclusive: OpenAI, Pentagon Strengthen AI Surveillance Safeguards

The Pentagon and OpenAI Adjust Their Contract Amid Concerns Over Domestic Surveillance
OpenAI and the Pentagon have agreed to modify their recently signed contract, following significant public backlash over concerns that the deal could still leave room for domestic mass surveillance. The updated language has not yet been formally signed, but sources familiar with the agreement confirmed the changes.
Why This Matters
The initial deal between the Pentagon and Anthropic to use Claude for national security purposes sparked controversy, and an agreement with OpenAI seemed unlikely unless concerns around domestic mass surveillance were addressed. According to sources, OpenAI CEO Sam Altman reached out to Emil Michael, the undersecretary of defense for research and engineering, to rework the contract.
The Big Picture
As negotiations with Anthropic fell apart, the Pentagon and OpenAI began exploring alternatives. Altman shared Anthropic's concerns about domestic mass surveillance and autonomous weapons, and critics questioned whether civil liberties and safety would truly be protected under the agreement.
That prompted Altman to field thousands of questions directly on X, while the Pentagon launched a messaging campaign to reassure observers. Both emphasized that they care about civil liberties and have no intention of spying on Americans. The department also said it would handle national security in accordance with laws and regulations, rather than at the direction of a private company.
A Pentagon official stated that the added language shows the issue was never about "mass surveillance" in the first place. "We are in the business of warfighting — giving our warfighters decision superiority is a mandate set by SecWar and POTUS," the official added.
What They're Saying
Altman reflected on his actions, stating, "One thing I think I did wrong: we shouldn't have rushed to get this out on Friday." He acknowledged the complexity of the issues and the need for clear communication. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future."
The revised language includes the following:
- "Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals."
- "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."
Additionally, Altman stated on X that the Pentagon has affirmed OpenAI's services will not be used by intelligence agencies such as the National Security Agency; any services to those agencies would require "a follow-on modification" to the contract.
Between the Lines
The amendment to the existing OpenAI-Pentagon contract explicitly covers "commercially acquired" and public information; the previous version mentioned only "private information." The change means that geolocation, web browsing, or personal financial data purchased from data brokers would also be off-limits.
Altman emphasized the importance of protecting American civil liberties, stating, "It's critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear, including around commercially acquired information." He added, "Just like everything we do with iterative deployment, we will continue to learn and refine as we go."
What We're Watching
As of Monday night, the Pentagon had not sent Anthropic a formal notice designating the company a "supply chain risk," as it had previously threatened. Altman continues to push for the same terms to be offered to the rival company.
"We will always come to the table for reasonable discussion as we did with OpenAI. Anthropic didn't want to do that, because they have their own personal vendettas," said a Pentagon official.
Editor's note: This story has been updated with new details throughout.