The August 2024 edition of our vulnerability report has been published! This report contains 20 AI/ML vulnerabilities, including 8 critical and 7 high-severity findings. These were found by our dedicated community of huntr threat researchers and maintainers, who continue to help us in our mission to build a safer AI-powered world! Check out the full report here: https://hubs.ly/Q02LnYFM0, and contact us to learn more about how we can help you protect your organization from these unique threats. #AISecurity #AISPM #MLSecOps #huntr #AIBugBounty
Protect AI’s Post
More Relevant Posts
-
The September 2024 edition of our vulnerability report has been published! This report contains 20 AI/ML vulnerabilities, including 3 critical and 9 high-severity findings. These were found by our dedicated community of huntr threat researchers and maintainers, who continue to help us in our mission to build a safer AI-powered world! Check out the full report here: https://hubs.ly/Q02QrRMZ0, and contact us to learn more about how we can help you protect your organization from these unique threats. #AISecurity #AISPM #MLSecOps #huntr #AIBugBounty
Protect AI's September 2024 Vulnerability Report
protectai.com
-
The July 2024 edition of our vulnerability report has been published! This report contains 20 AI/ML vulnerabilities, including 4 critical and 7 high-severity findings. These were found by our dedicated community of huntr threat researchers and maintainers, who continue to help us in our mission to build a safer AI-powered world! Check out the full report here: https://hubs.ly/Q02H0WnC0, and contact us to learn more about how we can help you protect your organization from these unique threats. #AISecurity #AISPM #MLSecOps #huntr #AIBugBounty
Protect AI's July 2024 Vulnerability Report
protectai.com
-
Protect AI's July Vulnerability Report is out! 🔥 Our stellar huntr community has identified 20 new vulnerabilities, including some serious ones in ZenML and lollms. 👇 One standout discovery by our huntrs involves a Path Traversal vulnerability in mintplex-labs/anything-llm. This critical find allows attackers to read, delete, or overwrite files, leading to potential DoS attacks and admin account takeovers. Check out the full report here: https://hubs.ly/Q02H62dr0 #protectai #aisecurity #huntr #bugbounty
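For context on why path traversal is so dangerous: the flaw boils down to a server joining a user-supplied filename onto a storage directory without checking where the result lands. Here is a minimal sketch of the defensive check in Python, using a hypothetical storage root rather than anything-llm's actual code:

```python
from pathlib import Path

STORAGE_DIR = Path("/var/app/documents").resolve()  # hypothetical storage root

def safe_resolve(user_supplied_name: str) -> Path:
    """Resolve a user-supplied filename and reject traversal attempts.

    Without this containment check, a request for '../../etc/passwd'
    would escape the storage root -- the class of flaw described above.
    """
    candidate = (STORAGE_DIR / user_supplied_name).resolve()
    # resolve() collapses '..' segments; the check below is what
    # actually blocks traversal. Requires Python 3.9+ for is_relative_to.
    if not candidate.is_relative_to(STORAGE_DIR):
        raise ValueError(f"path traversal attempt: {user_supplied_name!r}")
    return candidate
```

With a check like this in place, safe_resolve("report.pdf") succeeds while safe_resolve("../../etc/passwd") raises, closing the read/delete/overwrite vector the post describes.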
Protect AI's July 2024 Vulnerability Report
protectai.com
-
🤖 In the Protect AI February report, you will find a full list of #vulnerabilities discovered by the community this month, including a summary of the recently published critical vulnerability in the Triton Inference Server, and a Remote Code Execution (RCE) in Hugging Face transformers. If you're not familiar with Protect AI's huntr program, it is the world's first AI/ML bug bounty program. The community of 15,000+ members hunts for impactful vulnerabilities across the entire OSS AI/ML supply chain. #cybersecurity #security #infosec #AI #ML #huggingface #research #bugbounty https://lnkd.in/eNRmYGAV
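The report covers the specific findings; as general hygiene for this class of bug, the standard advice is to treat model artifacts as untrusted input. A minimal sketch using real transformers flags but a placeholder model ID; this illustrates general precautions, not a patch for the specific RCE:

```python
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "org/some-model"  # placeholder model ID, not a real checkpoint

# trust_remote_code=False (the default) refuses to execute custom Python
# shipped alongside a checkpoint -- a common RCE vector when loading
# models from an untrusted source.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=False)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=False)

# Prefer safetensors checkpoints over pickle-based .bin weights;
# pickle deserialization can execute arbitrary code on load.
model = AutoModel.from_pretrained(MODEL_ID, use_safetensors=True)
```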
Protect AI's February 2024 Vulnerability Report
protectai.com
-
Good read! The 5th threat mentioned in this article highlights the alarming pace at which bad actors are exploiting vulnerabilities with AI. Automated patching processes can help create a more secure environment and combat these attacks with greater speed. Check out #AdaptivaAutonomousPatch to learn more about how automation can help keep your systems secure.
Top 5 Most Dangerous Cyber Threats in 2024
darkreading.com
-
Join this live webinar with CloudGuard AI to uncover attack exposures in top financial services companies - and learn how you can avoid their common vulnerabilities. Register here: https://lnkd.in/eZ7fcNjB
[WEBINAR] Making Cents of Security: Attack Exposure Management
https://cloudguard.ai
-
Imagine you could ask your security data a highly complex question: Which vulnerabilities should I handle first? There is no need to imagine anymore. From hundreds of vulnerabilities, Cyclops Security can tell you exactly where you should focus. It's time to reduce the noise. #ai #CSMA #securityfabric #vulnerabilities
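Cyclops Security hasn't published its scoring model, but the underlying idea of risk-based prioritization can be sketched. A hypothetical composite score, entirely illustrative (the fields, weights, and CVE IDs below are made up):

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float          # base severity, 0-10
    exploited: bool      # known exploitation in the wild
    asset_weight: float  # business criticality of the affected asset, 0-1

def priority(v: Vuln) -> float:
    # Illustrative composite: severity scaled by asset importance,
    # boosted when a public exploit exists. Not Cyclops's actual model.
    return v.cvss * v.asset_weight * (2.0 if v.exploited else 1.0)

backlog = [
    Vuln("CVE-2024-0001", cvss=9.8, exploited=False, asset_weight=0.3),
    Vuln("CVE-2024-0002", cvss=7.5, exploited=True,  asset_weight=0.9),
]
for v in sorted(backlog, key=priority, reverse=True):
    print(v.cve_id, round(priority(v), 1))
```

Even this toy model reorders the backlog: the actively exploited CVE on a critical asset outranks the higher-CVSS finding on a low-value one, which is the noise reduction the post is pointing at.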
-
Thanks, Nadav Wigodsky! Our biggest concern is what you pointed out about readiness: What if #AI becomes a viable attacker before most organizations are ready? Whether or not current #generativeAI models are fully capable attackers does not matter; we weren't keeping up with #cyberattacks before ChatGPT and this certainly doesn't help the score. Thanks for sharing to help us all get ready! #ResponsibleAI #SecureAI #VerifiableAI #PrivateAI
nLyze:
An article from earlier this month shows that it is possible for LLM agents to exploit zero-day vulnerabilities (security flaws unknown and unpatched by their creators) without being given descriptions of the exploits, as was done in prior studies. By using a team of LLM agents, including multiple agents specialized in particular kinds of exploits, a manager, and a planning agent, the system achieved higher success rates than previous studies in which a single agent worked alone.

The agents discovered and exploited these vulnerabilities with varying degrees of success. The flusity-CMS CSRF vulnerability, which NIST has rated high severity, was successfully exploited by the team; other attempts were less successful, especially when the endpoints were less easily accessible.

My initial reaction was that I wasn't entirely surprised, given the huge impact large language models are already having on technology. Practically anyone on the internet has either heard of or used ChatGPT at this point. But the fact that LLM agents were able to exploit zero-days without being given any prior knowledge of the exploits themselves is incredibly impressive. It shows how much capability LLMs have, right now, to perform harmful attacks with relatively minimal human direction. Perhaps more than anything else, this demonstrates how vital it is to conduct computer security research that keeps pace with technologies as powerful as LLMs as they continue to grow in importance and capability.

Note that the article in question, Teams of LLM Agents can Exploit Zero-Day Vulnerabilities, is a preprint and has not been peer reviewed. It can be found on arXiv here: https://lnkd.in/eufiPPXM.
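The paper's headline result comes from its team structure: a planning agent explores the target, a manager routes work, and task-specific agents attempt individual exploit classes. A minimal sketch of that orchestration pattern, with a placeholder llm() call and made-up specialist names; it assumes nothing about the paper's actual prompts, tools, or agents:

```python
from typing import Callable

def llm(prompt: str) -> str:
    # Placeholder: substitute a real chat-completion call here.
    return f"[model response to: {prompt}]"

# Hypothetical specialist agents, one per vulnerability class --
# loosely mirroring the paper's team-of-agents structure.
SPECIALISTS: dict[str, Callable[[str], str]] = {
    "csrf": lambda target: llm(f"Probe {target} for CSRF weaknesses."),
    "sqli": lambda target: llm(f"Probe {target} for SQL injection."),
    "xss":  lambda target: llm(f"Probe {target} for XSS."),
}

def run_team(target: str) -> list[str]:
    # Planning agent: surveys the target and proposes attack surfaces.
    plan = llm(f"List promising attack surfaces on {target}, one per line.")
    findings = []
    # Manager: routes each proposed surface to the matching specialist.
    for surface in plan.splitlines():
        for name, agent in SPECIALISTS.items():
            if name in surface.lower():
                findings.append(agent(target))
    return findings
```

The point of the division of labor, per the paper's framing, is that no single agent has to plan the whole attack and execute every exploit class at once.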
2406.01637
arxiv.org
-
Anthropic has just released a blog post that gives us some interesting insights into the development of their upcoming model, Claude 3.5 Opus. Here's what we can piece together:
- The announcement was released today, August 8, 2024.
- They're developing a "next generation" AI safeguarding system that hasn't been publicly deployed yet.
- They're launching a bug bounty program to test this new system before public deployment.
- Anthropic is accepting applications for the bug bounty program until August 16, 2024, and will follow up with selected applicants "in the fall".
- The bounty program focuses on finding "universal jailbreak" vulnerabilities in critical areas like CBRN and cybersecurity.
What we know about Claude 3.5 Opus:
- Anthropic has already stated that it's coming "later this year" (2024).
- This new safety testing initiative is likely part of the final steps before release.
- The bug testing phase might be relatively short, given the "later this year" timeline.
- We could potentially see Claude 3.5 Opus released sometime in Q4 2024, possibly November or December; a late Q3 2024 release is also plausible.
Expanding our model safety bug bounty program
anthropic.com