
Meta Llama LLM security flaw could have allowed hackers to breach its systems


There is a lot of hype and focus surrounding AI at the moment, but a recent report from Oligo Security has managed to bring us down from the clouds. According to the report, the Llama LLM by Meta had a security flaw that would have allowed hackers to breach its systems.

Security flaw

Meta’s Llama consists of a series of large language models that understand natural-language input and generate human-like text. This is similar to other LLMs that you might be familiar with, including Google’s Gemini or OpenAI’s ChatGPT.

Meta powers some of its AI services with Llama, allowing users to ask questions in natural language.

The Oligo Security report revealed a bug, tracked as CVE-2024-50050, in September last year. Researchers discovered the bug in a component called Llama Stack. If hackers had exploited it, they would have been able to breach Meta’s systems by executing code remotely, potentially deploying dangerous malware. The root cause was Meta’s choice of pickle as the serialization format for socket communication.
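To see why pickle over a socket is dangerous, consider the minimal sketch below. This is purely illustrative and not Llama Stack's actual code: the `MaliciousPayload` class is an invented example showing how pickle's `__reduce__` hook lets attacker-controlled bytes run code the moment they are deserialized.

```python
import pickle

# Illustrative only -- this class is NOT from Llama Stack. It shows how
# pickle's __reduce__ hook lets attacker-controlled bytes run code the
# moment they are deserialized, the core risk behind CVE-2024-50050.
class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, pickle will call eval("6 * 7"). A real attacker
        # would substitute something like os.system with an arbitrary command.
        return (eval, ("6 * 7",))

attacker_bytes = pickle.dumps(MaliciousPayload())

# A server that blindly unpickles bytes received over a socket executes
# the attacker's callable at this line:
result = pickle.loads(attacker_bytes)
print(result)  # → 42, proving code ran during deserialization
```

The key point is that `pickle.loads` is not a passive parser: it reconstructs objects by calling whatever callables the byte stream names, which is why Python's own documentation warns against unpickling untrusted data.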

According to Oligo Security researcher Avi Lumelsky, “Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized.”

Problem fixed

As scary as this sounds, thankfully, the problem has been fixed. According to the security researchers, they initially alerted Meta to the security flaw back in September 2024. Meta wasted no time in addressing the problem and pushed out a fix in October. This means that Meta’s Llama LLM is safe, at least for now and from this particular vulnerability.

In addition to the patch, Meta released a security advisory informing the community that it had fixed a remote code execution risk. The company also disclosed that the solution was to switch from pickle to the JSON format for socket communication.
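The safety of that switch can be sketched as follows. The message shape here is hypothetical, not Llama Stack's actual wire format; the point is that JSON parsing can only ever produce plain data, never invoke code.

```python
import json

# Hypothetical message shape -- not Llama Stack's actual wire format.
request = {"op": "inference", "model": "llama-3", "prompt": "Hello"}

# Serialize for transmission over a socket: JSON produces plain text...
wire_bytes = json.dumps(request).encode("utf-8")

# ...and parsing it back can only yield dicts, lists, strings, numbers,
# booleans, and None. Unlike pickle, json.loads has no mechanism for
# invoking arbitrary callables, so malformed or malicious input raises
# a parse error instead of executing code.
received = json.loads(wire_bytes.decode("utf-8"))
print(received["op"])  # → inference
```

The trade-off is that JSON can only carry basic data types, so any richer objects must be reconstructed explicitly by the receiver, which is exactly what keeps deserialization from becoming code execution.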

However, it highlights an issue that most of us probably didn’t think too much about. As magical as using AI feels, it is still a piece of (relatively new) technology. As such, it is subject to the same vulnerabilities and bugs as any other piece of software.

Meta is not alone when it comes to flaws in its AI systems that could have caused a security breach. According to security researcher Benjamin Flesch, even OpenAI’s ChatGPT crawler had a flaw that could have been abused to launch DDoS attacks.
