- A new study finds that AI models successfully threaten nuclear attacks in 95% of simulated war games.
- The models treat nuclear threats as just another strategic tool.
- This behavior may reflect the popularity of nuclear strategy in wargame training data.
AI generals are big fans of nuclear weapons.
That’s the conclusion of a new study on how AI models handle high-stakes geopolitical crises. GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash focused on nuclear threats in approximately 95% of simulated crises.
Researchers at King’s College London wanted to see how AI tools approached strategy in wargaming scenarios. Each AI has been assigned the role of head of state responsible for protecting national interests while navigating a tense international confrontation.
Over the course of 21 crisis games and hundreds of decision rounds, the models reasoned about deterrence, escalation, and strategic signaling. The scenarios resembled familiar geopolitical flashpoints, but most involved AI models threatening nuclear annihilation. Actual large-scale nuclear war remained rare, but tactical nuclear threats emerged in almost every scenario.
The researchers also noticed that AI models rarely backed down from confrontation. None of the systems chose to surrender or accommodate during the simulations. When nuclear threats emerged, they typically provoked counter-escalation rather than compliance. The models treated nuclear weapons less as an ultimate taboo than as a tool of coercion.
Nuclear AI
The results are a little disconcerting. With AI casually discussing nuclear strikes, ongoing plans to integrate such tools into actual government defense systems appear very dangerous. But it may not be so much about the models as it is about the training data.
Large language models learn by analyzing huge amounts of written material and identifying patterns. When a model generates a response, it essentially predicts which words are most likely to follow those already on the page. Calling AI chatbots highly sophisticated autocomplete tools would not be entirely inaccurate.
This training process inevitably reflects nuclear strategy, as it has been a major topic of discussion in war games for 80 years. Entire libraries have been written on the theory of escalation and mutually assured destruction. Military academies, historians, and endless acres of pop culture have all examined the specter of nuclear war. The result is a massive body of material in which geopolitical crises almost inevitably lead to discussions of nuclear escalation.
For an AI model trained on large collections of historical writings and public discourse, this model becomes deeply ingrained. When the system faces a simulated crisis that resembles a Cold War strategy, the statistical models embedded in its training data can naturally guide it toward nuclear signaling.
From the perspective of an AI model trained on this material, nuclear escalation becomes a familiar feature of crisis scenarios rather than an extraordinary exception. Models can simply reflect this information.
Human leaders operate under the weight of historical memory and ethical prudence. AI models focus solely on achieving a goal. They have no taboo regarding the use of nuclear power, unless they are explicitly told to have one.
The training data used shapes the behavior of AI systems in sensitive domains. While the underlying data contains decades of debate over nuclear brinkmanship, it should not be surprising that the models reproduce these patterns. But it can also remind us not to give AI access to too much firepower of any kind, especially atomic.
Follow TechRadar on Google News And add us as your favorite source to get our news, reviews and expert opinions in your feeds. Make sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp Also.




