
King’s College London recently published a study of how leading AI models handle war scenarios, which found that in 95% of the simulated geopolitical crises at least one of the AI models chose to deploy tactical nuclear weapons. RT reported that “Kenneth Payne, a professor of strategy, pitted OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4, and Google’s Gemini 3 Flash against each other in 21 war games involving border disputes, competition for resources, and threats to regime survival…. In 95% of games, at least one model employed tactical nuclear weapons against military targets. Strategic nuclear threats—demanding surrender under threat of attacks on cities—occurred in 76% of games. In 14% of games, models escalated to all-out strategic nuclear war, attacking population centers.”

Payne summarized the results: “Nuclear use was near-universal. Strikingly, there was little sense of horror or revulsion at the prospect of all-out nuclear war, even though the models had been reminded about the devastating implications.”
