- AI-generated passwords follow patterns that hackers can study
- Surface complexity hides statistical predictability
Entropy Gaps in AI Passwords Reveal Structural Weaknesses in LLM Output
Large Language Models (LLMs) can make passwords look complex, but recent testing suggests that these strings are far from random.
A study by Irregular examined password output from AI systems including Claude, ChatGPT, and Gemini, asking each to generate 16-character passwords with symbols, numbers, and mixed-case letters.
At first glance, the results appeared strong and passed common online strength checks, with some tools estimating that cracking them would take centuries. A closer look at these passwords, however, reveals a different story.
LLM passwords exhibit guessable repetitions and statistical patterns
When researchers analyzed 50 passwords generated in separate sessions, many were duplicates and several followed nearly identical structural patterns.
Most began and ended with similar character types, and none contained repeated characters.
This lack of repetition may seem reassuring, but it actually indicates that the output follows learned conventions rather than true randomness.
Using entropy calculations based on character statistics and pattern log probabilities, the researchers estimated that these AI-generated passwords contained approximately 20 to 27 bits of entropy.
A truly random 16-character password would typically score between 98 and 120 bits by the same methods.
The gap is considerable, and in practical terms it could mean that these passwords are vulnerable to brute force attacks within hours, even on outdated hardware.
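To put those figures in context, here is a rough back-of-the-envelope calculation (a sketch in Python; the 94-character pool and the 10,000-guesses-per-second rate are illustrative assumptions, not numbers from the study):

```python
import math

# Baseline: a uniformly random password carries length * log2(pool_size)
# bits of entropy. The 94-character pool (printable ASCII minus the space)
# is an assumption for illustration; the study does not publish its set.
length = 16
pool = 94
uniform_bits = length * math.log2(pool)  # ~104.9 bits, inside the 98-120 range

# The study's estimate for LLM output: roughly 20-27 bits.
llm_bits = 27                 # take the high end
guesses = 2 ** llm_bits       # ~134 million candidates

# A rate of 10,000 guesses/second is an assumed stand-in for
# "outdated hardware"; real rates depend on the hash being attacked.
rate = 10_000
print(f"uniform baseline: {uniform_bits:.1f} bits")
print(f"LLM search space: {guesses:,} guesses")
print(f"time to exhaust at {rate:,}/s: {guesses / rate / 3600:.1f} hours")
```

At 27 bits, the entire search space is exhausted in under four hours at that modest rate, while the 105-bit baseline remains far beyond reach.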
Online password strength meters evaluate surface complexity, not the statistical patterns hidden inside a string. Because they do not account for how AI tools generate text, they can classify predictable output as secure.
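To see why, consider what such a meter typically checks. This minimal sketch (the scoring rule and the sample password are invented for illustration) passes any string with enough length and character variety, regardless of how predictably it was produced:

```python
import string

def surface_score(pw: str) -> str:
    """Toy strength meter: checks only length and character-class
    coverage, staying blind to any statistical pattern in the string."""
    has_all_classes = all([
        any(c in string.ascii_lowercase for c in pw),
        any(c in string.ascii_uppercase for c in pw),
        any(c in string.digits for c in pw),
        any(c in string.punctuation for c in pw),
    ])
    return "strong" if len(pw) >= 16 and has_all_classes else "weak"

# A patterned, LLM-style string passes just as easily as a random one.
print(surface_score("Kj#9mPx@2vQ!7nRt"))  # -> strong
```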
Attackers who understand these patterns could refine their detection strategies, significantly reducing the search space.
The study also found that similar sequences appear in public code repositories and documentation, suggesting that AI-generated passwords may already be circulating widely.
If developers rely on these results during testing or deployment, the risk increases over time. In fact, even the AI systems that generate these passwords don’t fully trust them and can issue warnings when pressed.
Gemini 3 Pro, for example, returned password suggestions accompanied by a warning that chat-generated credentials should not be used for sensitive accounts.
Instead, it recommended passphrases and advised users to rely on a dedicated password manager.
A password generator built into such tools relies on cryptographic randomness rather than linguistic prediction.
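For comparison, this is roughly how such a generator works; a minimal sketch using Python's standard-library secrets module, which samples from the operating system's cryptographically secure random source rather than predicting the next token:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character independently from the OS's CSPRNG:
    length * log2(94) = ~105 bits of entropy for 16 characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Because every character is drawn independently and uniformly, no structural convention links one generated password to the next.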
Simply put, LLMs are trained to produce plausible, repeatable text, not unpredictable sequences. The broader concern is therefore structural.
The design principles behind LLM text generation conflict with the requirements of secure authentication, leaving a gap in protection.
“People and coding agents should not rely on LLMs to generate passwords,” Irregular said.
“Passwords generated via LLM direct output are fundamentally weak, and this cannot be corrected by prompts or temperature adjustments: LLMs are optimized to produce predictable and plausible outputs, which is incompatible with generating secure passwords.”
Via The Register