⭐ This Thursday, we have the privilege of hosting our Head of Product, Susannah Shattuck, and Martin Stanley, CISSP. They will share their expertise on the newly defined Generative AI risks and controls and guide us on how to approach comprehensive AI governance and risk management for enterprises of all sizes that wish to deploy #GenAI tools safely.
Join this webinar to learn:
🔹 An overview of newly published GenAI governance documents, with a deep dive into National Institute of Standards and Technology (NIST) AI 600-1
🔹 How to apply GenAI controls to high-risk AI scenarios: high-risk AI industries and use case examples
🔹 Contextual AI governance: why you should apply controls and manage AI risk at the use-case level
You won’t want to miss it! Register now: https://lnkd.in/dxUzV2Xg
Credo AI’s Post
-
Going to be at #gsx2024 and wondering how you can use AI as a Security Risk Manager? I've got just the session for you: 'The AI Advantage: Harnessing AI for Security Risk Management' on Wednesday. This is a deep-dive, practical session where I'll get into the weeds of how Security Risk Managers can identify use cases where AI will offer them a significant advantage. The intent is that you'll walk away with a framework for how to think about AI, how to identify opportunities for deployment, and how to address the human factor. (Plus, I'll be sharing the key takeaways from the paper, AI Integrations in Enterprise Security Risk Management, that Douglas G. (@ Human Risks) and I just finished. Look out for more on that next week.) Room S210A, Wednesday morning, 9:45–10:45 AM. Looking forward to seeing you there! #securityriskmanagement #ASIS
-
Join our panel of experts, Norm Barber, Liji Thomas, and Stephen Lazzara, as they discuss a holistic approach to managing and mitigating the security and privacy risks associated with AI systems. Topics include:
✅ Understanding the new and unique risks that AI poses to organizations.
✅ How to view these risks through the lens of traditional risk mitigation strategies.
✅ How traditional risk management approaches will need to adapt to the unique nature of AI.
Reserve your seat today! https://lnkd.in/dCNkzN4J
-
It's so great to see this report come out. There is always a story behind every report, more than what is published, and this one is very special for me. This is a retrospective review of the pilot program I had the pleasure of leading while at the Responsible AI Institute. It examines one of the first known independent third-party audits of an AI system, in our case an automated lending system, and looks at how ISO/IEC 42001 and RAII's system-level certification could work together: where they overlap, and how a complete system can be tested. It was the culmination of a lot of hard work from many incredible people who took a chance on a new and big idea. And while the results weren't perfect, it showed where there are areas for improvement and what matters most to ensure AI systems are developed in a safe and responsible manner. Following several years of developing the first assessment for AI systems, understanding and mapping how organizational and product standards work together, and then finding institutional and business partners who understood the vision to pilot a project, this is one of the projects I am most proud of in my career. While niche at the time (and probably to an extent still now), I do hope that the findings and lessons learned from this pilot will help advance AI standards development in a meaningful way. I also appreciated that one of the lessons learned validated the pivot to my work at IAPP, where we are focused on training and certifying AI governance professionals. One of the significant barriers to achieving compliance was ensuring all relevant staff were trained appropriately and in a timely manner. It was noted that "more guidance on the best practices of providing information for different roles in the organization would be helpful." I'm happy that I get to continue to build on this work in a different capacity.
Huge huge thanks to everyone at Standards Council of Canada (SCC), ATB Financial, and in the AI standards ecosystem for the role you played in this. In particular Jacquelyn MacCoon Anneke Auer-Olvera Elias Rafoul William O'Neill Dan Semmens Yukun Z. Cathy Cobey, FCPA, FCA Var Shankar Alyssa Lefaivre Škopac Gabriel Bouffard Craig Shank Kim Lucy Yvonne Zhu Benjamin Faveri Graeme Auld Kasia Chmielinski Veronica Rotemberg and the late David Compton from UKAS. ❤
We’ve just published the AI accreditation lessons learned report, highlighting findings from our pilot of the ISO/IEC 42001 artificial intelligence management system (AIMS). Key takeaways:
• How the AIMS standard supports AI risk management and governance.
• The importance of assessing both AI management systems and AI products.
• Insights into sector-specific challenges for AI governance, including third-party risks and compliance metrics.
We’re now accepting expressions of interest for the pre-launch of our AIMS accreditation program. Read the full report here: https://ow.ly/IpnX50ThQNA
-
How prepared is the financial services industry for emerging Gen AI regulations? Please join me and two esteemed experts on AI risk management from NIST for a fireside chat on May 23rd. We will discuss the development of the AI RMF and how the financial services industry may adopt it for use. Register here: https://lnkd.in/eVgBGBRM
-
🔮 Invicti's Predictive Risk Scoring is a great example of using #AI to help improve security. It already works well to help you understand risks in your environment, and it will only get better over time. The webinar below is a great way to learn more about this new feature. I love what the team has delivered, and I'm excited for the next launch! https://okt.to/CuTYgZ
-
How prepared is the financial services industry for emerging Gen AI regulations and risk management? Enterprises deploying Generative AI are still catching up on recent announcements from the National Institute of Standards and Technology (NIST), the creators of the widely used NIST AI Risk Management Framework (AI RMF). Just last week, NIST announced its program to assess generative AI technologies, along with four draft publications to improve the safety, security, and trustworthiness of AI systems. Tune into Dynamo's fireside chat on May 23 @ 4PM ET with NIST's Principal Investigator for AI Bias, Reva S., and CISA's Strategic Technology Branch Chief, Martin Stanley. The discussion will be led by Daniel Ross, Dynamo AI's Head of AI Compliance Strategy. Register using the following link: https://lnkd.in/eVgBGBRM
-
Just finished "Leveraging AI for Governance, Risk, and Compliance" by Terra Cooke! Check it out: https://lnkd.in/gp_U6tVX #artificialintelligenceforbusiness, #governanceriskmanagementandcompliance.
-
Just finished "Leveraging AI for Governance, Risk, and Compliance" by Terra Cooke! Check it out: https://lnkd.in/dwfcykS2 #artificialintelligenceforbusiness, #governanceriskmanagementandcompliance.
-
Sysdig's secret power is runtime security. So of course when we think about AI, we think about how to help enterprises run AI workloads securely in production. Time to move all those AI proof-of-concepts into the real world!
🚀 Introducing: AI Workload Security for CNAPP! 🚀 Today, we are thrilled to announce the addition of AI Workload Security to our CNAPP, which offers real-time visibility and proactive risk management for AI environments. Now, Sysdig customers can pinpoint suspicious activity on workloads containing AI packages, prioritize active AI risks, and fix issues, fast. Check out our official launch blog to learn more about this exciting new feature! 👇
Accelerating AI Adoption: AI Workload Security for CNAPP | Sysdig