- Security researchers found a way to abuse Meta's Llama framework for remote code execution
- Meta addressed the problem in early October 2024
- The root cause was the use of pickle as a serialization format for socket communication
Meta’s large language model (LLM) framework had a vulnerability that could have allowed threat actors to execute arbitrary code on the affected server, experts warned.
Cybersecurity researchers at Oligo Security published an in-depth analysis of a bug tracked as CVE-2024-50050, which, according to the National Vulnerability Database (NVD), carries a severity score of 6.3 (medium).
The bug was discovered in a component called Llama Stack, designed to streamline the deployment, scaling, and integration of large language models.
Oligo described the affected version as “vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data”.
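To see why deserializing untrusted data is so dangerous, consider this minimal sketch (a hypothetical illustration, not Llama Stack's actual code): Python's pickle format lets an object define, via `__reduce__`, a callable that the deserializer invokes, so simply loading an attacker-supplied payload runs attacker-chosen code.

```python
import pickle

# Hypothetical attacker-side class (for illustration only): __reduce__
# tells pickle to call an arbitrary callable during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        # Here the callable is a harmless eval of arithmetic; a real
        # attacker could return os.system with a shell command instead.
        return (eval, ("21 * 2",))

# Bytes an attacker might send over the socket
payload = pickle.dumps(MaliciousPayload())

# A server that blindly unpickles incoming data executes the callable:
result = pickle.loads(payload)
print(result)  # 42 - attacker-controlled code has already run
```

The key point is that the code runs inside `pickle.loads` itself, before the server inspects the result, which is why pickle should never be used on data from an untrusted peer.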
NVD describes the flaw as the use of pickle as a serialization format for socket communication, potentially allowing remote code execution. “Socket communication has been changed to use JSON instead,” the description adds.
The researchers notified Meta of the bug on September 24, and the company addressed it on October 10 by pushing version 0.0.41. The Hacker News notes the flaw was also fixed in pyzmq, a Python library that provides access to the ZeroMQ messaging library.
Alongside the patch, Meta published a security advisory telling the community it had fixed a remote code execution risk associated with using pickle as a serialization format for socket communication. The solution was to switch to the JSON format.
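The remediation pattern can be sketched as follows (an assumed illustration of the general approach, not Meta's actual patch code): JSON can only represent plain data such as dicts, lists, strings, and numbers, so deserializing attacker-supplied bytes cannot trigger code execution.

```python
import json

# A message of the kind a service might pass over a socket
message = {"task": "completion", "prompt": "hello"}

# Serialize to bytes for the wire, then parse on the receiving side.
# json.loads only reconstructs plain data types; unlike pickle, it has
# no mechanism for invoking arbitrary callables during parsing.
wire_bytes = json.dumps(message).encode("utf-8")
received = json.loads(wire_bytes.decode("utf-8"))

print(received)  # {'task': 'completion', 'prompt': 'hello'}
```

The trade-off is that JSON cannot round-trip arbitrary Python objects, but for inter-process messages made of plain data that restriction is exactly what makes it safe.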
Llama (Large Language Model Meta AI) is a series of large language models developed by the social media giant Meta. These models are designed for natural language processing (NLP) tasks such as text generation, summarization, and translation.