- Nearly a thousand Google and OpenAI employees have signed an open letter calling for clear limits on military uses of AI.
- The letter urges tech companies to oppose the government’s plans for AI surveillance and autonomous weapons.
- The move reflects growing tensions within the AI industry over government contracts and defense partnerships.
Nearly a thousand Google and OpenAI employees have signed an open letter urging their companies to resist pressure from the US military to ease restrictions on the use of AI systems. The letter states "We will not be divided" on the subject, even after the Pentagon designated Anthropic "a supply chain risk" when the company refused to allow its technology to be used for domestic mass surveillance or for fully autonomous weapons.
The move shocked many Silicon Valley observers and sparked a wave of concern among the engineers who build today's AI models, especially since OpenAI and Google are reportedly negotiating to take over the agreement Anthropic rejected.
The signatories frame their message in unusually direct language for an industry known for careful corporate communication. The letter alleges that government officials are trying to pressure AI companies into abandoning certain ethical boundaries.
“They are trying to divide each company, lest the other give in. This strategy only works if none of us knows where the others stand,” the letter said. “This letter serves to create common understanding and solidarity in the face of this pressure from the War Department.”
The open letter is notable because it includes people from rival companies that normally compete fiercely with each other. The argument they make is that AI is now powerful enough that decisions about its use cannot be treated like routine business deals.
These concerns are not purely theoretical. Governments around the world are exploring how AI could be integrated into defense planning and intelligence analysis. Military agencies have long used software tools for surveillance and targeting, and advanced generative models could significantly accelerate these capabilities. And with early studies showing how readily AI models escalate to the nuclear option in war games, handing them control of weapons and surveillance systems seems like an even worse idea.
AI War
It’s a bit of a throwback for Google employees, thousands of whom protested the company’s involvement in Project Maven, a 2018 Pentagon program that used machine learning to analyze drone footage. After much internal backlash, Google ultimately let the contract expire and released a set of ethical guidelines known as the AI Principles.
These principles were intended to define how Google would approach sensitive uses of artificial intelligence. At the time, the company said it would not develop technologies designed to cause harm or enable surveillance that violates international standards. The latest open letter suggests that similar tensions are resurfacing as governments become more interested in deploying powerful language models.
The letter may or may not change either company’s decisions, but workers can at least view it as a message that cannot be misinterpreted.