- The Pentagon and Anthropic are in conflict over the use of Claude
- Claude was reportedly used in the U.S. operation to capture Nicolás Maduro
- Anthropic refuses to allow its models to be used in “fully autonomous weapons and mass domestic surveillance”
A divide has emerged between the Pentagon and several AI companies over how their models can be used in operations.
The Pentagon has asked AI vendors Anthropic, OpenAI, Google and xAI to allow their models to be used for “all lawful purposes.”
Anthropic has expressed fears that its Claude models could be used in autonomous weapons systems and mass domestic surveillance, with the Pentagon threatening to terminate its $200 million contract with the AI supplier in response.
A $200 million AI weapons standoff
Speaking to Axios, an anonymous advisor to the Trump administration said one of the four companies had agreed to allow the Pentagon full use of its model, while two others showed flexibility in how their AI models can be used.
The Pentagon’s relationship with Anthropic has been strained since January over the use of its Claude models, with the Wall Street Journal reporting that Claude was used in the U.S. military operation to capture then-Venezuelan President Nicolás Maduro.
An Anthropic spokesperson told Axios that the company has “not discussed the use of Claude for specific operations with the Department of War.” The company said its usage policy with the Pentagon was under review, with specific reference to “our strict limits on fully autonomous weapons and mass domestic surveillance.”
Pentagon chief spokesperson Sean Parnell said, “Our nation demands that our partners be prepared to help our warfighters win any fight.”
Security experts, policymakers and Anthropic CEO Dario Amodei have called for greater regulation of AI development and stronger safeguards, with specific reference to the use of AI in weapons systems and military technology.