Google is in advanced negotiations with the U.S. Department of Defense for a classified artificial intelligence agreement that would deploy its Gemini models in secure military environments, according to a report from The Information. The proposed contract language reportedly mirrors the terms of OpenAI's existing Pentagon deal almost word-for-word, including a broad "all lawful uses" clause that effectively overrides paper carve-outs against autonomous weapons and mass surveillance.
This marks a complete reversal from Google's 2018 decision to withdraw from Project Maven following employee protests. The negotiations signal a strategic shift as Google's Public Sector division targets $2 billion in defense bookings between 2025 and 2027.
The Three-Lab Landscape
The report outlines starkly different approaches to military AI contracts among the three leading AI labs:
OpenAI: The Established Precedent
OpenAI has already signed a contract with the Pentagon that includes the "all lawful uses" permission. While the agreement contains soft carve-outs against autonomous weapons and mass surveillance, the broad clause effectively supersedes them. CEO Sam Altman has reportedly asked the Pentagon to extend these same terms to every AI lab, establishing a de facto standard.
Google: Following the Template
Google is now negotiating its own agreement, proposing language that closely follows OpenAI's framework. The deal would include classified compute capacity and potential deployment of Google's custom TPU hardware in secure government environments. This follows Google's quiet removal of weapons and surveillance prohibitions from its AI principles in early 2025, which preceded a Public Sector sales kickoff. Over 200 Google employees signed a letter opposing military AI applications shortly after the principles change.
Anthropic: The Excluded Outlier
Anthropic has been frozen out of Pentagon negotiations after co-founder Dario Amodei refused to drop the company's safeguards against autonomous lethal weapons and domestic surveillance. The Pentagon declared Anthropic a "supply chain risk" in February 2025, and two lawsuits related to the exclusion are pending. This commercial exclusion creates a market gap that Google and OpenAI are positioned to fill.
Technical and Commercial Implications
The negotiations involve more than just model access. Google's proposal includes deploying its specialized Tensor Processing Unit (TPU) infrastructure in secure government environments, suggesting a deeper integration than simple API access. This would provide the Pentagon with dedicated, classified compute capacity running Google's hardware-software stack.
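To make that distinction concrete, the sketch below contrasts the two integration patterns in Python. Everything here is an illustrative assumption: the endpoint URL, model identifier, and `AirGappedModel` interface are placeholders, since the actual tooling inside a classified TPU deployment is not publicly documented.

```python
import requests

# Pattern 1: hosted API access. Prompts leave the customer's network and are
# served from the provider's shared cloud fleet. (Endpoint and model name are
# hypothetical placeholders, not a real Google API.)
def query_hosted(prompt: str, api_key: str) -> str:
    resp = requests.post(
        "https://api.provider.example/v1/generate",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "gemini-placeholder", "prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

# Pattern 2: dedicated classified compute. Model weights run on TPU hardware
# racked inside the secure facility, so prompts and outputs never cross the
# enclave boundary. `AirGappedModel` is an assumed local-inference interface,
# stubbed here for illustration.
class AirGappedModel:
    def __init__(self, weights_path: str):
        self.weights_path = weights_path  # weights stored on-premises

    def generate(self, prompt: str) -> str:
        raise NotImplementedError("inference runs only inside the enclave")

def query_airgapped(prompt: str, model: AirGappedModel) -> str:
    return model.generate(prompt)
```

The operational difference is where the weights and the data sit: the first pattern trusts the provider's cloud boundary, while the second keeps both inside the government enclave, which is what dedicated TPU capacity would buy.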
Google Public Sector's $2 billion defense booking target for 2025-2027 represents a significant revenue opportunity. Anthropic's exclusion from this market creates competitive space that Google appears eager to capture, particularly as defense AI spending accelerates.
Internal and External Reactions
The shift has generated internal tension at Google. Notably, former Google AI chief Jeff Dean, who signed an amicus brief supporting Anthropic's legal challenge to its Pentagon exclusion, reportedly remains at Google, the company now negotiating the very type of deal the brief argued against. This highlights the complex ethical positioning within the industry as commercial opportunities expand.
The employee letter signed by over 200 Google workers specifically opposed "weapons and surveillance" applications, indicating that internal dissent similar to the 2018 Project Maven protests may reemerge as negotiations progress.
agentic.news Analysis
This development represents a pivotal moment in the commercialization of frontier AI models. Google's reversal on military AI contracts completes a trajectory we've tracked since early 2025, when the company quietly amended its AI principles. The move aligns with Google Cloud's aggressive public sector push under Thomas Kurian, which we covered in our February 2025 analysis of cloud providers' government strategies.
The three-way divergence among OpenAI, Google, and Anthropic creates a new competitive axis in the AI industry: willingness to engage with defense applications. OpenAI's "all lawful uses" template has become the market standard that others must either accept or reject, with significant commercial consequences. Anthropic's exclusion demonstrates the tangible cost of maintaining stricter ethical boundaries—a dynamic we predicted in our December 2024 article on AI ethics as a competitive differentiator.
Notably, this shift occurs as defense agencies globally increase AI spending. The Pentagon's classification of Anthropic as a "supply chain risk" suggests national security considerations now explicitly include corporate ethical policies, creating a new dimension in vendor selection. Google's ability to offer both TPU hardware and Gemini models in classified environments gives it a technical advantage over pure software providers, potentially reshaping the defense AI vendor landscape.
Frequently Asked Questions
What is Google's proposed Pentagon AI deal?
Google is negotiating a classified agreement to deploy its Gemini AI models in secure Department of Defense environments. The deal would include dedicated TPU hardware infrastructure and follows contract terms similar to OpenAI's existing Pentagon agreement, including a broad "all lawful uses" clause.
How does this differ from Google's 2018 Project Maven position?
This represents a complete reversal. In 2018, Google withdrew from Project Maven (a Pentagon drone imagery analysis program) after significant employee protests and then adopted AI principles restricting weapons work. The company has since removed those restrictions and is now actively pursuing defense contracts with fewer ethical constraints.
Why is Anthropic excluded from Pentagon contracts?
Anthropic was declared a "supply chain risk" by the Pentagon after refusing to drop its safeguards against autonomous lethal weapons and domestic surveillance. That stance has shut the company out of defense AI contracts; two lawsuits challenging the exclusion are pending.
What are the implications of the "all lawful uses" clause?
The "all lawful uses" clause in OpenAI's (and potentially Google's) Pentagon contracts effectively overrides any paper protections against specific applications like autonomous weapons or mass surveillance. This gives the military broad latitude in deployment while allowing companies to claim they have ethical guidelines in place.