- Dell CEO Michael Dell answered a question about Anthropic during a forum
- The CEO said companies should not dictate how governments use their technology
- Dell added that this is not a “workable model.”
Michael Dell said in a Bloomberg Television interview that companies doing business with the government cannot dictate how their technology is used.
“I just don’t think it’s a workable model,” the Dell CEO added when asked about Anthropic’s ongoing battle against the Pentagon’s designation of the company as a “supply chain risk.”
Speaking at a forum in Washington, Dell did not mention Anthropic by name. He added that his company has systems and controls in place to ensure sales are restricted to authorized users, but did not elaborate.
The Anthropic battle
Defense Secretary Pete Hegseth recently called Anthropic a “supply chain risk” after the AI company refused to allow the U.S. government to use its Claude model for mass domestic surveillance and fully autonomous weapons systems.
This designation, combined with President Donald Trump issuing an executive order directing all government agencies to stop using Anthropic technology, led Anthropic to file two lawsuits against the U.S. government in an attempt to have the designation rescinded.
The supply chain risk designation is typically reserved for foreign companies at risk of exploitation by adversaries, with the most notable example being U.S. sanctions and designations against Huawei.
What happens next?
By calling Anthropic a supply chain risk, the Trump administration is setting a dangerous precedent. Either companies are forced to comply with the U.S. government’s desired use of their products, as happened with OpenAI’s latest contract, or they decline to renew their contracts and the government purchases the technology from another company.
Those in the know will remember how Google ended its partnership with the U.S. military after an internal petition over the company’s involvement in Project Maven garnered more than 4,000 signatures. The project used AI image recognition software developed by Google for drone strikes in the Middle East.
Google opted to let its contract expire without renewal, and the U.S. government turned to other companies, including Palantir, Anduril, Amazon Web Services, and Anthropic, to fill the void.
Today, facing the consequences of the Anthropic situation, nearly 1,000 Google and OpenAI employees have signed letters calling for clear limits on the military uses of AI. If these companies give in to their employees’ demands, they could expose themselves to the wrath of the American government. On the other hand, they risk a mass exodus of employees if those demands go unmet.
One outcome the U.S. government may not have anticipated in its dealings with Anthropic is that AI companies may now be less willing to work with the U.S. Department of Defense, fearing their technology could be used for purposes their terms of service explicitly prohibit.