- Microsoft discovers “Whisper Leak,” an attack exposing privacy flaws in encrypted AI systems
- Encrypted AI chats can still reveal clues about what users are discussing
- Attackers can track conversation topics using packet size and timing
Microsoft has revealed a new type of cyberattack called “Whisper Leak,” capable of revealing topics users discuss with AI chatbots, even when conversations are fully encrypted.
The company’s research suggests that attackers can study the size and timing of encrypted packets exchanged between a user and a large language model to infer what is being discussed.
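To illustrate the observation side of such an attack, here is a minimal sketch (not Microsoft’s actual tooling) of how a passive on-path observer could record the size and timing of encrypted packets using the Python scapy library. The capture filter is an illustrative assumption; a real observer would narrow it to a specific chatbot’s address range.

```python
# Illustrative sketch only: passively record the size and arrival time of
# encrypted packets flowing to/from a chatbot endpoint. No decryption occurs;
# the attack works purely on this metadata.
from scapy.all import sniff  # capturing requires root/admin privileges

trace = []  # list of (timestamp, payload_size) observations

def record(pkt):
    # TLS ciphertext rides inside TCP, and its length closely tracks the
    # plaintext length -- that correlation is what leaks information.
    if pkt.haslayer("TCP") and len(pkt["TCP"].payload) > 0:
        trace.append((float(pkt.time), len(pkt["TCP"].payload)))

# "tcp port 443" is a generic HTTPS filter (an assumption for this sketch).
sniff(filter="tcp port 443", prn=record, timeout=60)

# The resulting (timing, size) sequence is the kind of feature vector a
# topic classifier could be trained on.
print(f"captured {len(trace)} packets")
```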
“If a government agency or Internet service provider monitored traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics,” Microsoft said.
Whisper Leak Attacks
This means that “encrypted” does not necessarily mean invisible: the vulnerability lies in how LLMs stream their responses.
These models do not wait until a complete response is ready; instead, they send data incrementally, token by token, creating small patterns that attackers can analyze.
Over time, as attackers collect more samples, these patterns become clearer, allowing increasingly precise inferences about the nature of the conversations.
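The mechanism is easy to simulate. Because TLS preserves the approximate length of each record, the ciphertext size of every streamed chunk tracks the token text it carries; the toy sketch below illustrates this, with the per-record overhead constant an assumption rather than a measured value.

```python
# Toy simulation of why streaming leaks: each incrementally sent chunk
# produces a ciphertext whose size mirrors the chunk's plaintext size.
TLS_OVERHEAD = 29  # assumed per-record overhead in bytes; varies by cipher suite

def observable_sizes(tokens):
    """Sizes an on-path observer sees, one TLS record per streamed token."""
    return [len(tok.encode("utf-8")) + TLS_OVERHEAD for tok in tokens]

response = ["The", " symptoms", " of", " measles", " include", "..."]
print(observable_sizes(response))
# -> a sequence of sizes that differs between topics, and grows more
#    distinctive as an observer collects more sampled conversations.
```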
The technique does not decrypt messages directly, but it exposes enough metadata to draw informed conclusions, which is arguably just as concerning.
Following Microsoft’s disclosure, OpenAI, Mistral and xAI all said they had moved quickly to deploy mitigation measures.
One solution adds a “random sequence of variable-length text” to each response, disrupting the consistency of token sizes that attackers rely on.
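Conceptually, that mitigation resembles the following server-side sketch, which pads each streamed chunk with a random-length filler field so ciphertext sizes no longer track token lengths. The field names and length bounds are illustrative assumptions, not any vendor’s exact implementation.

```python
import secrets
import string

def pad_chunk(chunk: str, min_pad: int = 8, max_pad: int = 64) -> dict:
    """Wrap a streamed text chunk with random variable-length padding so the
    encrypted record size no longer correlates with the token length.
    Field names and bounds here are illustrative assumptions."""
    pad_len = min_pad + secrets.randbelow(max_pad - min_pad + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    # The client simply discards the filler field after decryption.
    return {"content": chunk, "obfuscation": filler}
```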
In the meantime, Microsoft advises users to avoid sensitive discussions on public Wi-Fi, to use a VPN, or to stick to non-streaming LLM models.
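On the non-streaming point, most chat APIs let clients request the full reply in a single encrypted body, which removes the per-token size and timing signal. A minimal sketch using the OpenAI Python client (the model name is illustrative):

```python
# Request the full response in one encrypted HTTP body instead of a
# token-by-token stream, removing the per-token size/timing signal.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",          # illustrative model name
    messages=[{"role": "user", "content": "..."}],
    stream=False,                 # one response body, no incremental chunks
)
print(reply.choices[0].message.content)
```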
The findings arrive alongside new tests showing that several open-weight LLMs remain vulnerable to manipulation, particularly in multi-turn conversations.
Cisco AI Defense researchers found that even models built by large companies struggle to maintain security controls once the dialogue becomes complex.
Some models, they said, displayed “a systemic inability…to maintain safety guardrails during prolonged interactions.”
In 2024, reports revealed that an AI chatbot leaked more than 300,000 files containing personally identifiable information, and that hundreds of LLM servers were left exposed, raising questions about how secure AI chat platforms really are.
Traditional defenses, such as antivirus software or firewalls, cannot detect or block side-channel leaks like Whisper Leak. Taken together, these findings show that AI tools can unintentionally increase users’ exposure to surveillance and data inference.