Klaudia Krawiecka, PhD’s Post


💻 Meta Security Engineer | Oxford PhD Researcher in ML, Biometrics, IoT | Forbes Top 100 Women 2020 | Google & (ISC)² Scholar | Co-lead Women@Meta London

In December, I had the privilege of co-organizing the Multi-Agent AI Security workshop at the #neurips2023 conference in New Orleans. Here are some of the key insights and experiences from this event:

🧠 The Rationale Behind the Workshop
The rapid advancement of AI technologies, such as ChatGPT, promises significant economic and societal benefits. However, the impending deployment of these advanced AI systems also raises a host of concerns, ranging from robustness and fairness to more extreme scenarios like physical safety. The pace at which these systems are developing often outstrips the ability to incorporate essential security principles. Our workshop aimed to bridge the gap between the AI and Information Security communities, which currently lack sufficient interconnection to address both immediate and future threats. Our goal was to create a roadmap for the future of AI security through a series of expert discussions, industry practitioner interventions, keynote speeches, and contributed research content.

🗣 Panel Discussion
I had the opportunity to lead a panel debate on AI security, safety, and ethics. The panel featured an engaging conversation with Sanja Šćepanović-Stojanović, PhD, Stephen McAleer, Adam Gleave, and Esben Kran. The discussion revolved around several key questions:
👉 How can we effectively incorporate cybersecurity principles into the foundational design of AI systems to ensure robust and resilient applications?
👉 What are the emerging cybersecurity threats specifically targeting AI systems, and how can we mitigate them?
👉 What role should government regulation play in guiding the integration of AI in cybersecurity, and how can policies be shaped to foster innovation while ensuring security?
👉 What are the potential long-term security and safety implications of unregulated AI-to-AI interactions, and how can regulators anticipate and address these consequences?

👩‍💻 👨‍💻 Diversifying the Workforce
Another significant aspect of my participation in the workshop was the opportunity to engage in discussions about diversifying the workforce in the information security and AI fields. These conversations provided invaluable insights into the challenges and opportunities in this domain. The importance of diversity in these fields cannot be overstated, as it fosters innovation, enhances problem-solving capabilities, and ensures a broader perspective in the development and application of AI technologies.

For those interested, the live recordings will be made available shortly. Stay tuned! 😎

The list of accepted papers: https://lnkd.in/ePd2K9gw

Hawra Milani Swapneel Mehta Christian Schroeder de Witt Martin Strohmeier Carla Zoe Cremer

Christian Schroeder de Witt

Senior Research Associate in Machine Learning, University of Oxford | Leading Foundational Research in Agentic AI ↔️ Multi-Agent Security: Mis/Disinformation, Privacy, Cybersecurity, and AI Safety.

9mo

This was a super exciting discussion. Thank you so much for moderating and guiding this panel, Klaudia. I hope we can have you again in the future!

Swapneel Mehta

Founder, Postdoc at BU and MIT

9mo

It was a fantastic experience working with you! Looking forward to the next workshop :D

Anna Łunkiewicz

Talent Manager | Talent Advisor

8mo

congrats!
