- Meta AI assigned unique identifiers to prompts and answers
- Meta's servers did not verify who had the right to access these identifiers
- The vulnerability was fixed at the end of January 2025
A bug that could have exposed users' prompts and AI-generated answers on Meta's artificial intelligence platform has been fixed.
The bug stemmed from the way Meta AI assigned identifiers to prompts and responses.
When a logged-in user edits a previous prompt to regenerate a different response, Meta assigns that prompt a unique number. By changing this number, an attacker could get Meta's servers to return the prompts and results of someone else.
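This class of flaw is commonly called an insecure direct object reference (IDOR). The following is a minimal, hypothetical sketch (not Meta's actual code, and all names are invented) of what such a lookup looks like with and without an ownership check:

```python
# Hypothetical in-memory store of prompt/answer records, keyed by the
# numeric identifier the server hands out.
conversations = {
    101: {"owner": "alice", "prompt": "Summarize my contract", "answer": "..."},
    102: {"owner": "bob", "prompt": "Cheap VPN options?", "answer": "..."},
}

def get_conversation_vulnerable(conversation_id, requesting_user):
    # BUG: returns the record no matter who asks for it.
    # Any logged-in user who guesses another ID sees someone else's data.
    return conversations.get(conversation_id)

def get_conversation_fixed(conversation_id, requesting_user):
    # FIX: verify that the requester actually owns the record
    # before returning it.
    record = conversations.get(conversation_id)
    if record is None or record["owner"] != requesting_user:
        return None  # in a real service: respond with 403 or 404
    return record
```

With the vulnerable version, "alice" can read record 102 simply by incrementing her own identifier; the fixed version returns nothing unless the record belongs to the requester.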
No abuse so far
The bug was discovered by Sandeep Hodkasia, a security researcher and founder of Appsecure, at the end of December 2024. He reported it to Meta, which deployed a fix on January 24, 2025, and paid him a $10,000 bug bounty for his efforts.
Hodkasia said the prompt numbers that Meta's servers generated were easy to guess, but apparently no threat actor had exploited this before the bug was addressed.
This essentially means that Meta's servers were not checking whether the requesting user was actually authorized to view the content.
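The "easy to guess" part matters because sequential numeric identifiers can be enumerated by simply counting. A hypothetical illustration of why random identifiers raise the bar (though they are no substitute for the server-side authorization check described above):

```python
import itertools
import uuid

# Sequential assignment: the next identifier is trivially predictable.
# An attacker who received 1000 can simply try 1001, 1002, ...
counter = itertools.count(1000)
sequential_ids = [next(counter) for _ in range(3)]  # 1000, 1001, 1002

# Random assignment: uuid4 carries 122 bits of randomness per identifier,
# so blindly guessing another user's identifier is infeasible in practice.
random_ids = [uuid.uuid4() for _ in range(3)]
```

Even with random identifiers, the server must still verify ownership on every request; unguessable IDs only defend against blind enumeration, not leaked or shared links.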
This is clearly problematic in several ways, the most obvious being that many people share sensitive information with chatbots these days.
Business documents, contracts, reports, and personal information are uploaded to LLMs every day, and in many cases people use AI tools as makeshift therapists, sharing intimate life details and private confessions.
This information could be abused in, among other things, highly personalized phishing attacks, which could lead to infostealer deployment, identity theft, or even ransomware.
For example, if a threat actor knows that a person asked the AI about cheap VPN solutions, they could send that person an email offering a great, affordable product that is actually nothing more than a backdoor.
Via TechCrunch