We are thrilled to announce that our paper, "Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators," has been accepted for presentation at the ACM Conference on Fairness, Accountability, and Transparency #ACMFAccT 🎉

This research addresses the significant ethical and safety risks posed by AI-generated human speech, which is often used in malicious activities such as swatting, identity theft, and unauthorized voice use. By analyzing real-world incidents, we developed a conceptual framework and taxonomy to categorize these harms, providing a comprehensive understanding of how these risks arise and affect various stakeholders. Our methodology offers a relational approach, capturing the complexity of sociotechnical AI systems and supporting effective policy interventions. This work not only contributes to academic discourse but also aims to inform policymakers and technology developers in creating safer and more ethical AI systems.

We extend our gratitude to our collaborators and the reviewers for their invaluable feedback. Join us at the ACM FAccT Conference 2024 to delve deeper into our findings and discuss the future of ethical AI development.

🔗 https://bit.ly/3yK1kkG

#AI #ACM2024 #EthicalAI #AIResearch #SpeechGeneration
SonyAI’s Post
More Relevant Posts
-
Voice clones and audio deepfakes have been all over the news in recent months. I started working on this topic a year ago, when I joined SonyAI for a summer internship. Proud to present our work on categorising harms from speech-generation AI at #ACMFAccT today in Rio de Janeiro, with Orestis Papakyriakopoulos and Alice Xiang.

In the paper we propose a conceptual framework for modelling pathways to ethical and safety harms of multimodal generative AI systems. We then use this framework to develop a taxonomy of harms of speech generators from reported AI incidents.

What's the big idea? We position AI harms as being caused by responsible entities that create or deploy AI, and resulting in negative outcomes for affected entities. This relationship can be modelled as pathways to harm, i.e. causal chains of events that are required for a harm to be realised. We believe that such harm pathways offer a useful perspective for identifying points of intervention to mitigate harms. The details of that, however, remain future research.

You can read more about our work in this blog post: https://lnkd.in/dEF4Taim, or, if research is your thing, access the paper directly: https://lnkd.in/dfBmdySp
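The pathway framing described above — a responsible entity, a causal chain of events, and a negative outcome for an affected entity — can be sketched as a simple data model. This is a hypothetical illustration only, not code from the paper; the class, field, and method names are my own assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class HarmPathway:
    """A causal chain of events linking an action by a responsible
    entity to a negative outcome for an affected entity."""
    responsible_entity: str  # who created or deployed the AI system
    affected_entity: str     # who experiences the negative outcome
    events: list = field(default_factory=list)  # ordered causal chain
    outcome: str = ""        # the realised harm

    def intervention_points(self):
        # Each event in the chain is a candidate point of intervention:
        # breaking any link prevents the harm from being realised.
        return list(self.events)


# Example: a hypothetical unauthorised voice-cloning incident
pathway = HarmPathway(
    responsible_entity="deployer of a voice-cloning service",
    affected_entity="person whose voice is cloned",
    events=[
        "voice samples scraped without consent",
        "voice clone generated",
        "clone used in a scam call",
    ],
    outcome="identity theft / fraud",
)
print(pathway.intervention_points())
```

The point of the sketch is that once a harm is represented as an explicit chain, every link becomes a place where a mitigation could be applied — which is the perspective the post argues for.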
-
🔍 In this #ResearchMonday's spotlight: a pivotal ICLR 2024 paper that reshapes our understanding of fine-tuning LLMs. It reveals that fine-tuning may only superficially align models, without deeply altering their pre-trained capabilities. 🧠💻 This insight helps explain why fine-tuning can easily jailbreak an LLM; see https://buff.ly/47J1WTw. It's not just about the ease of the attack; it's about the fundamental nature of fine-tuning itself. The authors also express enthusiasm for future research aimed at not just masking but potentially deleting or unlearning certain pre-trained capabilities, enhancing the safety and reliability of AI systems. 🛡️🤖 A crucial read for those navigating the complex terrain of AI ethics and security. #AIethics #AISecurity

The paper 📜: https://lnkd.in/euZjw3Zt
The authors' thread 🧵: https://buff.ly/3U9t0Ii
-
💥 On May 16th, Rui Afeiteira led an insightful session at IDC titled "The Power of Data," representing BI4ALL. 💡 For those who missed it, Rui delved into critical subjects such as AI, security, privacy, explainability, and ethics, showcasing BI4ALL's expertise. 🚀 Discover their innovative generative AI framework addressing these challenges and learn how to harness AI's potential for your organization.

💡 Key Takeaways:
✅ Security and Privacy: Essential for trust in AI
✅ Explainability: Understanding AI decisions
✅ Ethics and Data Quality: Avoiding biases and ensuring reliability

➡️ Watch the full session here: https://lnkd.in/dmAQjz9W

#AI #DataScience #GenerativeAI #BI4ALL #ThePowerOfData #AIFramework #EthicalAI #ExplainableAI #YouTubeSession #TechInnovation
-
Senior Cyber Security Specialist @ Nestlé | Artificial Intelligence | AI Security | Deep Learning | Machine Learning | Network Security | Cyber Security | Linux | Python
𝐆𝐮𝐚𝐫𝐝𝐢𝐚𝐧𝐬 𝐨𝐟 𝐭𝐡𝐞 𝐅𝐮𝐭𝐮𝐫𝐞: 𝐓𝐡𝐞 𝐇𝐞𝐚𝐫𝐭 𝐨𝐟 𝐀𝐈-𝐃𝐫𝐢𝐯𝐞𝐧 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲

Ensuring bias mitigation and fairness in AI security systems is not just a technical necessity but a fundamental ethical obligation. In the realm of AI security, biases in algorithms can lead to disproportionate false positives and negatives, undermining the reliability of threat detection and response mechanisms. We’ve seen facial recognition systems in security applications misidentify individuals from certain demographic groups at alarmingly higher rates, raising serious ethical and legal concerns. Such biases compromise the efficacy of security measures and can erode public trust, potentially leading to violations of regulatory frameworks designed to protect individual rights.

Fairness in AI security is equally critical to ensure equitable treatment across diverse populations. Without fairness, security measures can disproportionately target or overlook specific groups, affecting the system’s legitimacy and acceptance. By integrating advanced techniques like adversarial debiasing, algorithmic auditing, and fairness-aware machine learning models, we can develop AI systems that detect and correct biases. This not only enhances the robustness and inclusivity of security applications but also ensures compliance with ethical standards and legal requirements, fostering a more just and secure society.

Let’s prioritize bias mitigation and fairness in our AI solutions to build trust and uphold ethical standards in the security domain.

#AISecurity #EthicalAI #FairnessInAI #BiasMitigation
-
Artificial Intelligence (AI) holds immense potential, but we must tread carefully to address ethical and regulatory considerations. Here's what we need to keep in mind:

1️⃣ Fabricating Thought Leadership: Avoid the temptation to misuse AI to create false expertise or thought leadership. Integrity is crucial.
2️⃣ Ethical Responsibility: Safeguard sensitive information and be vigilant in protecting data privacy. Understand the fine print of AI, recognizing the input it requires to generate output.
3️⃣ AI as a Tool: Treat AI as a powerful tool in your arsenal, just like a hammer or a screwdriver. Use it to enhance your knowledge and capabilities, but don't rely on it exclusively or let it create information for you.

Watch the full episode: https://lnkd.in/eQQgEMAa

#AIethics #ResponsibleAI #EthicalTechnology #RVASmallBusinessNetwork #GrowBusinesssPodcast
The Ethics of Artificial Intelligence: A Powerful Tool with Boundaries
-
🛡️ 𝐀𝐬𝐬𝐮𝐫𝐢𝐧𝐠 𝐀𝐈 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 & 𝐒𝐚𝐟𝐞𝐭𝐲

MITRE's latest guidance provides recommendations to the incoming administration on AI governance, balancing technological advancement, ethics, and public trust. This comprehensive approach focuses on key areas to ensure AI is regulated effectively:
✅ Adversary Awareness
✅ Sector-Specific Assurance
✅ Transparency

📥 Read more: https://lnkd.in/dnaMthjj

At Lumenova AI, we specialize in helping organizations navigate AI regulations and implement secure, transparent AI practices. Partner with us to ensure your AI systems are compliant and trustworthy.

#AI #AIRegulation #AIGovernance #MITRE #LumenovaAI #AISecurity #AISafety
-
If you worry about Big Tech, it's natural to be concerned about the implications of Big AI as well. Here are a few expectations or considerations you might have regarding Big AI:

1. Ethical and responsible use of AI.
2. Transparency and explainability of AI systems.
3. Accountability and regulation of AI technology.
4. Protection of data privacy and security.
5. Fairness and bias mitigation in AI decision-making.
6. Collaboration and inclusivity in AI development.
7. Empowerment and human augmentation through AI.

These expectations revolve around ensuring ethical practices, transparency, accountability, fairness, privacy, inclusivity, and human-centric approaches in the development and deployment of Big AI.

#EthicalAI #ResponsibleInnovation #TransparentTechnology #AccountableAI #PrivacyMatters #FairnessInAI #InclusiveFuture #HumanEmpowerment #CollaborativeDevelopment #RegulatingBigAI #DataProtection #BiasMitigation #AIResponsibility #EthicsInTechnology #SocietalImpact
-
🚀 Join the Discussion on "Balancing Innovation and Ethical Risks in AI" 🚀

🤖 Why This Matters: Innovation in AI is progressing at a lightning pace, bringing forth solutions and possibilities that were once mere science fiction. However, with great power comes great responsibility.

🔑 Key Points to Consider:
📌 Transparency in AI Systems: Understanding how AI makes decisions is key to building trust and accountability.
📌 Bias and Fairness: We must ensure AI systems do not perpetuate or amplify societal biases.
📌 Privacy and Data Security: Safeguarding personal data against misuse is a paramount concern in AI applications.
📌 Regulatory Frameworks: Collaborating to establish global standards and regulations that guide ethical AI development.

I invite you to register for this free masterclass using the link below:
👉 https://lnkd.in/g4c7UKM3
Date: 26-Jan-24
Time: 8:30 pm (IST)

#aiethics #ethicalai #responsibleai #aiforgood #databias #dataprivacy #datagovernance #aigovernance #airegulation