Nook is a single-day, insider-threat-focused conference co-founded by our CEO, Mia Temple ⚜️, and Advanced Onion, Inc. This year we will be discussing all the 'nooks and crannies' of LLMs, ML, and AI as they relate to InT/Insider Risk Management for the public and private sectors.
- FREE for Active Duty Govt
- Student Discounts
- Group Discounts
- Earn 10 CPE Credits
Call for speakers is now open, and registration is open but limited! Register ASAP to join us on Nov. 6th in Monterey, CA 👌🏽 www.itsnookevents.com #insiderthreatevents #llmevent #threatprevention #corporateevents
Creative Strategist Empowering Others for Equitable Happiness + Positive Change. 💥 hummcreative.com #womenintech #eventcurator #creativeprof
[CALL FOR PRESENTERS] - Join us in Monterey, CA, on November 6th, 2024 for the 2nd Nook Conference, hosted by Advanced Onion, Inc. This year's focus is on LLMs, ML, and AI as they relate to our shared mission of mitigating internal threats. DM me for more info or if you are interested in participating in this conversation. We recognize the importance of creating a platform for secure information sharing, where practitioners and thought leaders can share their best practices, challenges, and insights so we can meet today's cyber challenges head-on. YOU make our events what they are. We want to hear your fresh perspectives for today and our future, so let's bring it! #llm #insiderthreat #datasec #cybersec
Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer
Focusing on LLMs, ML, and AI within InT/Insider Risk Management is timely given the rise of "shadow IT" and autonomous threat actors leveraging these technologies. The conference's emphasis on "nooks and crannies" suggests a deep dive into nuanced attack vectors and defense strategies, perhaps exploring concepts like adversarial machine learning and explainable AI. Given Mia Temple's expertise in this domain, I wonder if the sessions will delve into the ethical implications of using AI for threat detection and mitigation, particularly concerning bias and potential for misuse? How might we effectively implement robust "red teaming" exercises to test the resilience of AI-powered security systems against sophisticated InT actors employing novel attack techniques?