🚨 Cybersecurity Alert: The New Frontiers of Voice Cloning 🚨

OpenAI has made headlines with its latest development in voice synthesis technology: Voice Engine. Capable of cloning a voice from just 15 seconds of audio, this technological feat opens the door to remarkable advances but also raises crucial cybersecurity questions (a new tool easily accessible to any novice hacker?).

The potential of this technology to improve accessibility and global communication is immense. However, the cyber-risk implications are equally significant. Voice cloning can facilitate sophisticated scams (such as CEO fraud), identity theft, and even the compromise of security systems based on voice recognition, calling into question the reliability of voice-based biometric authentication.

With this in mind, it is essential to raise awareness and engage our professional community about the ethical and security implications of such innovations. The recent Ars Technica article (mentioned in the Wired piece) highlights not only the capabilities of Voice Engine but also the precautions OpenAI is taking to navigate this complex landscape. In short, a must-read :) https://lnkd.in/eW4CF8A7

#IA #Cyber #DevoteamCybertrust #Cybertrust #OpenAI #AI #Risk #Riskmanagement #WIRED #Threats #Devoteam

CC: WIRED, Devoteam Cyber Trust
Benoît MICAUD’s Post
More Relevant Posts
-
CEO / Founder CodeEye | Cyber Security Market Advisor looking for merger and acquisition opportunities
The recent OpenAI data breach serves as a critical wake-up call for the tech industry. As we advance further into the era of artificial intelligence, the importance of robust cybersecurity cannot be overstated. This incident underscores the need for comprehensive security frameworks that not only protect sensitive data but also maintain public trust.

The breach reveals potential gaps in our current security measures and highlights the urgency for continuous improvement. It's not just about patching vulnerabilities after the fact, but proactively anticipating and mitigating risks. This proactive approach must be embedded in the development and deployment of AI technologies from the ground up.

Moreover, transparency is key. Organizations must be open about their security practices and breaches, fostering a culture of accountability and learning. Sharing insights and strategies can help the entire industry improve its defenses and reduce the likelihood of future breaches.

As leaders in tech, it's our responsibility to push for higher standards and to invest in the necessary resources to secure our digital landscape. The future of AI is bright, but it must be built on a foundation of trust and security. Let's use this incident as a catalyst for meaningful change and innovation in cybersecurity practices.

#CyberSecurity #AI #TechLeadership #DataProtection #OpenAIBreach #businesstoday #irisaspm https://lnkd.in/gG43DSwA
Hacker breaches OpenAI's internal messaging systems, steals AI design details
businesstoday.in
-
Given this increased capability introduced by OpenAI, here is an important cybersecurity awareness tip. Voice authentication may now be vulnerable due to AI voice impersonation capabilities. You should consider disabling voice authentication immediately and updating cybersecurity protocols. Educate your organization and your family about the emerging AI risk of voice cloning and impersonation. #airisks #ai https://lnkd.in/gnBmEcp2
Navigating the Challenges and Opportunities of Synthetic Voices
openai.com
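The advice above, treating a cloned voice as a defeated authentication factor, can be sketched as a simple policy check. Everything here is illustrative: the factor names and the "at least one non-spoofable factor" rule are assumptions for the sketch, not any product's API.

```python
# Minimal policy sketch: voice alone no longer authenticates, because a
# short audio sample can be enough to clone the speaker's voice.
SPOOFABLE_FACTORS = {"voice"}  # illustrative: factors defeated by AI cloning

def authentication_allowed(presented_factors):
    """Allow login only if at least one non-spoofable factor is present."""
    trusted = [f for f in presented_factors if f not in SPOOFABLE_FACTORS]
    return len(trusted) >= 1

print(authentication_allowed(["voice"]))          # voice alone: False
print(authentication_allowed(["voice", "totp"]))  # voice + one-time code: True
```

In practice this is the shape of the recommendation in the post: keep voice as a convenience signal if you like, but never as the sole gate.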
-
Rather important security finding by researchers: "Hackers can read private AI assistant chats even though they're encrypted." The TL;DR: due to a quirk in the way AI chatbots (all major ones except Google's Gemini) stream their answers back to end users token by token, hackers passively watching the encrypted streams can infer the length of each token from packet sizes, and can then use LLMs themselves to reconstruct likely user/GPT conversations. Here's a very nice article with explanations by Ars Technica: https://lnkd.in/eFyjKG6b
Hackers can read private AI-assistant chats even though they’re encrypted
arstechnica.com
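The mechanism behind the finding is that streaming one token per encrypted record leaks each token's length, since modern ciphers preserve plaintext length. A minimal sketch of the side channel, with hypothetical record sizes and an assumed fixed per-record overhead:

```python
# Sketch of the token-length side channel: an eavesdropper cannot read the
# ciphertext, but each streamed token arrives in its own record, and
# ciphertext length tracks plaintext length.
RECORD_OVERHEAD = 29  # assumed fixed record overhead in bytes (illustrative)

def token_lengths(observed_record_sizes):
    """Recover the character length of each streamed token from record sizes."""
    return [size - RECORD_OVERHEAD for size in observed_record_sizes]

# Hypothetical capture of four encrypted record sizes from one response:
sizes = [33, 30, 34, 31]
print(token_lengths(sizes))  # -> [4, 1, 5, 2]
```

The sequence of token lengths is what the researchers then feed to an LLM to guess the plaintext; the mitigation discussed in the article is padding or batching tokens so lengths no longer leak.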
-
https://lnkd.in/dfN8pgpk PDF https://lnkd.in/dbPcQ-FC

"A new report from #OpenAI has been released about the misuse of its AI services by malicious actors. In previous reports, OpenAI shared how it blocked accounts linked to state-sponsored hacker groups and identified information-psychological campaigns utilizing AI-generated content. This time, the report presents examples of AI being used in both cyber operations and information-psychological operations.

The report highlights the activities of three hacker groups: the presumably pro-China SweetSpecter, and two pro-Iranian groups, CyberAv3ngers and STORM-0817. These groups used OpenAI services for various tasks: gathering vulnerability information, aiding in the development of malware, assisting with code obfuscation, providing advice on post-compromise commands, social engineering, and more. Notably, OpenAI shows how the malicious actors' activities align with tactics, techniques, and procedures (TTPs) related to the use of large language models (LLMs). However, OpenAI uses categories developed with Microsoft instead of the more detailed MITRE ATLAS matrix (https://lnkd.in/dw--6AvX), which adapts the ATT&CK matrix for AI/ML attacks. Nonetheless, the differences are not substantial. See the image for an example of TTPs used by the SweetSpecter group.

On the information-psychological front, the report includes several examples of AI-generated content, ranging from short comments or posts on Twitter to long articles and even images. According to the report, this content was used to promote politically charged messages and, in one case, to lure users to gambling websites. Interestingly, the geography of blocked accounts includes Russia, Rwanda, Israel, and even the USA: someone from America was allegedly conducting a pro-Azerbaijani information campaign.

In broader terms, this report, like some others (https://lnkd.in/dwAtUUSi), shows that LLMs can significantly simplify hackers' work and lower the entry barrier. However, attackers currently rely on advanced services from leading companies like OpenAI, and as a result they potentially expose their operations to the security teams of those services. While not all companies may handle this well, OpenAI, supported by Microsoft and others in the industry, is paying increasing attention to security. Therefore, APTs and advanced groups will likely recognize this risk and might avoid ChatGPT or use it only in limited situations. For regular users and companies, it's worth remembering that their conversations with chatbots are likely stored somewhere, and both service employees and outsiders might gain access to them."

Tags: #OpenAI #CyberOperations #AIAbuse #SweetSpecter #CyberAv3ngers #STORM0817 #InformationPsychologicalOperations #LLM #Microsoft #TTP #CyberSecurity #APT #MITREATLAS #AI
An update on disrupting deceptive uses of AI
openai.com
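As a rough illustration, the observations in the post can be structured as simple records for tracking. The group names, assessed alignments, and task list are taken from the report summary above; the record schema itself is a hypothetical assumption, not the report's or MITRE ATLAS's format.

```python
# Illustrative structuring of the report's findings; the schema is
# hypothetical, the contents are quoted from the post's summary.
from dataclasses import dataclass

@dataclass
class LlmAbuseGroup:
    name: str
    assessed_alignment: str  # attribution as stated in the report summary

groups = [
    LlmAbuseGroup("SweetSpecter", "presumably pro-China"),
    LlmAbuseGroup("CyberAv3ngers", "pro-Iranian"),
    LlmAbuseGroup("STORM-0817", "pro-Iranian"),
]

# Tasks the report says these groups (collectively) used OpenAI services for:
observed_tasks = [
    "gathering vulnerability information",
    "aiding in the development of malware",
    "code obfuscation assistance",
    "advice on post-compromise commands",
    "social engineering",
]

for g in groups:
    print(f"{g.name} ({g.assessed_alignment})")
print("Observed LLM-assisted tasks:", ", ".join(observed_tasks))
```

A real tracker would map each task onto MITRE ATLAS or the OpenAI/Microsoft categories mentioned above, but those mappings are not given in the post, so they are left out here.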
-
https://lnkd.in/erYQTe-A #artificialintelligence is either fascinating or terrifying - I can't tell just yet. As we marvel at its potential to revolutionize industry and enhance our lives, we also wrestle with the risks it poses when left unchecked. AI’s accessibility opens doors for both positive and negative actors. While AI promises efficiency and innovation, it also provides a medium for bad actors to acquire and project dangerous capabilities. Without proper regulation, we risk enabling harmful applications, from #misinformation campaigns to #cyber attacks.
Here Come the AI Worms
wired.com
-
Follow along as the McDonald Hopkins Data Privacy and Cybersecurity team dives into the newest and most pressing information regarding #AI in its series Artificial Intelligence in Brief. #dataprivacy #McDonaldHopkins
Artificial Intelligence in Brief, Vol. 1: What is it? What is it for?
mcdonaldhopkins.com
-
What kind of conversations are you having with your AI assistant, like Copilot or ChatGPT? Whatever you share, it's fine, because they're encrypted… aren't they? Well, it turns out they might not be as private as we thought. Researchers have uncovered a loophole that cyber criminals can exploit to eavesdrop on our supposedly secure conversations. They identified a vulnerability in how these assistants transmit data, allowing attackers to peek into your virtual chats without breaking a sweat. The irony is that attackers have to use AI to listen in on our conversations with AI. But there's hope on the horizon: security companies are already on the case, looking at ways to increase protection and keep your conversations private. This is a good time to review all the security you have in place for your business. I can help with that; get in touch. https://lnkd.in/eRC7TbQy
Hackers can access your private, encrypted AI assistant chats
techspot.com
-
AI is here to stay. And if you're not on the AI train yet, here's some thinking from Josiah Hagen of Trend Micro on how to do it securely -- on a device, computer, app and more. Read more about the future of AI security in this CNET article. Thanks, Bree Fowler! #cybersecurity #WWD #SecureAI #AI
Apple Faces a Tough Task in Keeping AI Data Secure and Private
cnet.com
-
OpenAI is collaborating with the US Defense Department on cutting-edge open-source cybersecurity tools. It is good to see a focus on election integrity, and it will be interesting to see how successful this is. They are attempting to implement safeguards in AI content generation and enhance transparency to empower voters navigating the digital landscape. Addressing concerns about deepfakes aligns with industry standards, marking a pivotal step towards responsible AI use. #OpenAI #AIInnovation
What worries CEOs the most about generative AI
qz.com