- Google employees sign open letter to CEO regarding concerns over military AI use
- AI developers don’t want their technology used for ‘classified purposes’
- Google is currently negotiating a contract with the Pentagon
More than 600 Google employees have signed a letter calling on CEO Sundar Pichai to reject any use of its AI technology for military purposes.
The open letter highlights the staff’s serious ethical concerns, stating: “Human lives are already being lost and civil liberties are under threat at home and abroad due to the misuse of technology that we play a key role in building.”
“As people working on AI, we know that these systems can centralize power and that they make mistakes,” the letter said. “We believe that our proximity to this technology imposes a responsibility on us to expose and prevent its most unethical and dangerous uses.”
Another “supply chain risk” designation?
In March, Anthropic CEO Dario Amodei refused to allow the Pentagon to use the Claude model, fearing it would be used for “mass domestic surveillance” and “fully autonomous weapons,” leading Defense Secretary Pete Hegseth to declare the company a “supply chain risk.”
Shortly after, OpenAI stepped in to fill the void left by Anthropic, with CEO Sam Altman facing internal and external criticism over his apparent desire to authorize military use of ChatGPT.
OpenAI’s contract with the Pentagon initially contained loopholes that would have allowed the very uses of ChatGPT that Anthropic feared for Claude. It was subsequently amended to stipulate that OpenAI’s models would not be used for “deliberate tracking, surveillance, or control of persons or U.S. persons, including through the obtaining or use of commercially acquired personal or identifiable information.”
Shortly afterward, Sam Altman told his employees that the Pentagon had said OpenAI “cannot make operational decisions” about how the military uses AI technologies.
Today, Google employees join the growing number of AI company employees and members of the public opposed to the military use of AI tools. “Making the wrong choice now would cause irreparable damage to Google’s reputation, business, and role in the world,” the letter said.
Following protests involving Google staff in 2018, the company changed its AI principles to state that it would not deploy its AI tools where they were “likely to cause harm” and that it would not “design or deploy” AI tools for surveillance or weaponization. These clauses were quietly removed from its AI Principles on February 4, 2025.