- Anthropic CEO Dario Amodei doesn’t want Claude used by the Pentagon for mass domestic surveillance and autonomous weapons
- A statement laid bare why Anthropic kept Claude’s guardrails
- Pete Hegseth gave Anthropic until Friday to provide DoD with full access
Anthropic CEO Dario Amodei released a statement regarding the ongoing disagreement between the company and the US Department of Defense.
Amodei said Anthropic “cannot, in good conscience, comply” with the DoD’s request to provide full access to its AI models, fearing they could be used for “mass domestic surveillance” and “fully autonomous weapons.”
US Defense Secretary Pete Hegseth threatened to label Anthropic a “supply chain risk” and invoke the Defense Production Act to force the company into compliance.
Unprecedented threats against Anthropic
In his statement, Amodei said Anthropic has historically enjoyed a very good relationship with the US government, noting that it was the first AI company to deploy its models within US government networks and national laboratories, and the first to deploy models for national security use.
Amodei also noted that the company complied with US regulations on the use and sale of AI models to China, to the extent that it chose to “forgo several hundred million dollars in revenue” by preventing Claude’s use by the Chinese Communist Party.
“Anthropic understands that the War Department, not private companies, makes military decisions,” Amodei continued. “However, in a limited number of cases, we believe that AI can undermine, rather than uphold, democratic values.”
Anthropic’s hesitation to provide the DoD with full access to Claude centers on the model’s potential misuse for two purposes.
First, regulations surrounding AI have not caught up with the capabilities of models such as Claude, Amodei says, which would allow the US government to deploy Claude as a tool for mass domestic surveillance.
Theoretically, the government could purchase highly detailed records and use AI models to organize them into highly accurate profiles of US citizens on a scale never before seen.
Second, on the use of AI in weapons systems, Amodei says such systems “could prove critical to our national defense,” but that current AI models are “simply not reliable enough to power fully autonomous weapons.” If an AI model tasked with controlling an autonomous weapons system were to hallucinate, responsibility would likely fall on the model’s developer.
Amodei also addresses the threats made by Hegseth, stating that they “are inherently contradictory: one calls us a security risk; the other describes Claude as essential to national security.”
The statement concludes that “Anthropic’s strong preference is to continue serving the Department and our warfighters – with our two requested safeguards in place.”
“If the Department chooses to withdraw from Anthropic, we will work to enable a smooth transition to another supplier, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the extended terms we have proposed for as long as necessary.”