Advai

IT Services and IT Consulting

We don't make AI, we break it.

About us

We don’t make AI, we break it. Advai is the UK leader in 3rd-party assurance, benchmarking, and monitoring of AI systems. We enable AI adoption by helping organisations manage the risk, compliance and trustworthiness of their AI systems. Our focus is to identify and mitigate the risks involved in AI adoption by discovering points of failure in both internally developed and 3rd-party procured technologies. Our tooling mitigates AI-specific security vulnerabilities and ensures the reliability, robustness and trustworthiness of your AI and Generative AI systems. Advai’s monitoring platform aligns with your governance frameworks, with technical metrics mapped to your risk and compliance needs. We have driven initiatives for the UK Government’s AI Safety Institute, the Ministry of Defence, and leading private sector companies.

Industry
IT Services and IT Consulting
Company size
11-50 employees
Headquarters
London
Type
Privately Held
Founded
2020
Specialties
Artificial Intelligence, Adversarial AI, Adversarial ML, Robust AI, AI Assurance, AI Compliance, and AI Risk Management

Updates

  • Advai

    In the absence of standardisation, it falls to present-day adopters of #AI systems to select the most appropriate assurance methods themselves. Here's an article about a few of our approaches, with some introductory commentary on the UK Government's drive to promote transparency across the #AISafety sector. Enjoy the article ☕

    A Look at Advai’s Assurance Techniques as Listed on CDEI

    Advai on LinkedIn

  • Advai

    We applied to speak at DevSecCon because organisations are increasingly motivated to integrate AI into their services, yet embedding AI within applications introduces novel security and trust-related challenges. Join our very own Chris Jefferson, Advai's CTO, to learn about:

    1) Planning for AI in your development lifecycle: learn how to strategically incorporate AI into your development processes while managing the unique risks it introduces.
    2) Understanding AI failure modes: gain insights into identifying and addressing potential AI failure modes to ensure your AI models perform securely and reliably.
    3) The importance of automated AI testing in DevSecOps pipelines: discover how automated testing for AI failure modes can act as critical gates in your DevSecOps pipelines.

    #AISecurity #DevSecOps #AITrust #AIrisks #AIFailureModes #AutomatedTesting #AIsafety DevSecCon Global Community

    Chris Jefferson, Co-Founder & CTO at Advai:

    📌 Excited to share that I'll be speaking at DevSecCon this October! 📌

    Join me for a 15-minute session on the AI Security Track diving into "Integrating AI Safely: Automating AI Failure Mode Testing in DevSecOps Pipelines." As organizations embrace AI to drive efficiency, it's crucial to address the security and trust hurdles that come with it. Traditional testing approaches may not suffice, necessitating a focus on AI-specific failure modes and testing. Explore how automating AI model testing can elevate application security and dependability. Discover how tailored strategies and automated procedures can mitigate risks, ensuring a smooth AI integration without compromising system integrity.

    Can't wait to share my insights with you all! Find out more about the event here: https://lnkd.in/eH8Np-Ej #AI #DevSecOps #Cybersecurity
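To make the "critical gates" idea above concrete, here is a minimal, hypothetical sketch of an automated AI failure-mode check that could sit as a gate in a DevSecOps pipeline. The toy classifier, the noise perturbation, and the MAX_ACCURACY_DROP threshold are illustrative assumptions only, not Advai's tooling or the DevSecCon session material.

```python
# Minimal sketch of an automated AI failure-mode gate for a CI/CD pipeline.
# All names and thresholds are illustrative assumptions, not Advai's tooling.
# Run with: pytest test_robustness_gate.py
import numpy as np
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

NOISE_SCALE = 0.3         # perturbation strength used for the robustness check
MAX_ACCURACY_DROP = 0.10  # gate: fail the build if accuracy drops more than this


@pytest.fixture(scope="module")
def model_and_data():
    # Stand-in for the model under test; a real pipeline would load the
    # candidate model artefact and a held-out evaluation set instead.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, X_test, y_test


def test_noise_robustness_gate(model_and_data):
    """Fail the pipeline stage if accuracy collapses under small perturbations."""
    model, X_test, y_test = model_and_data
    clean_acc = model.score(X_test, y_test)

    rng = np.random.default_rng(0)
    X_noisy = X_test + rng.normal(scale=NOISE_SCALE, size=X_test.shape)
    noisy_acc = model.score(X_noisy, y_test)

    assert clean_acc - noisy_acc <= MAX_ACCURACY_DROP, (
        f"Robustness gate failed: accuracy dropped from {clean_acc:.3f} "
        f"to {noisy_acc:.3f} under input noise"
    )
```

In a pipeline, a failing test like this would block promotion of the model artefact in the same way a failing unit test blocks a code merge.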

  • Advai reposted this

    David Sully, Advai's CEO/Co-Founder: Enabling Safe, Secure AI Adoption.

    A recent report from RAND investigating the root causes of why AI projects fail sounds hugely familiar to the Advai team and anyone who follows our posts. RAND's 5 suggestions for improving the chances of successful adoption:

    1. Ensure that technical staff understand the project purpose and domain context;
    2. Choose enduring problems: AI projects require time and patience to complete;
    3. Focus on the problem, not the technology (AI isn't a solution for every problem);
    4. Invest up front in infrastructure to support data governance and model deployment;
    5. "Understand AI’s limitations. Despite all the hype around AI as a technology, AI still has technical limitations that cannot always be overcome. When considering a potential AI project, leaders need to include technical experts to assess the project’s feasibility."

    I would argue that No. 5 is the most important: getting the right people and technology to Test and Evaluate AI, and to ultimately Assure what is created, enables the other 4 by providing the evidence you need to understand where to focus effort.

    Full report can be found here: https://lnkd.in/e42xgTs7 James Ryseff, Brandon De Bruhl, MPP, MA,

    The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed: Avoiding the Anti-Patterns of AI

    rand.org

  • Advai

    David Sully explains why you need to break AI to identify and mitigate the risks involved in AI adoption. #AiSafety #AIAssurance #podcast

    Atreides:

    As the world comes to increasingly rely on AI systems for generating and handling information, we may become more beholden to systems that we simply don’t understand. David Sully, CEO of adversarial AI company Advai, chats with host Terry Pattar to explore how and why we need to ‘break’ AI to mitigate the potential risks, if we’re going to take advantage of all the potential upsides. Find out about the future of AI in the latest podcast from Atreides: https://lnkd.in/egXuVDTe #podcast #ai #technology Emilie de Rosenroll, ICD.D Jeffrey Spencer

  • Advai

    The Alan Turing Institute takes a fascinating look at blockers to AI adoption in the criminal underworld! It's bizarre to consider that malicious actors are facing the same #AIAssurance blockers as regular organisations.

    Consistent blockers to genAI adoption - sound familiar?
    // #hallucinations
    // risks to operational #security
    // lack of reasoning abilities
    // #TrainingDataQuality
    // #SkillsShortage

    The paper looks at three anticipated applications where #generativeAI may "uplift malicious actors' capabilities":
    1) Malicious code generation (self-altering, strategic malware)
    2) Radicalisation (autonomous agents promoting extremism)
    3) Weapon instruction / attack planning (AI that aids and abets)

    For example, regarding malicious code generation, genAI might enable characteristics such as self-disguise, use of tooling, planning, persistence and cooperation, and situational awareness.

    The paper also outlines that human-machine teams are already in use. Criminals are also keeping a #HumanInTheLoop to reduce risk and ensure quality!

    We'll leave you with a great quote from the concluding remarks, referring to their leveraging of #LawEnforcement expertise in creating this report: "No single discipline should define what risks are important, how they should be measured, and whether a system is safe."

    Evaluating Malicious Generative AI Capabilities

    cetas.turing.ac.uk

  • Advai

    Here are 7 summary insights from a webinar on red-teaming for responsible AI. #RedTeaming is a must-have approach for assuring AI systems and adopting AI responsibly. Huge thanks to Tess Buckley and techUK for inviting us to join the panel on "Red Teaming Techniques for Responsible AI". Chris Jefferson enjoyed the conversation and we would like to thank our co-panellists Steve Eyre from Alchemmy, Tessa Darbyshire from Accenture and Nicolas Krassas. #AiSafety #AiAdoption

  • Advai

    Did you read Mark Zuckerberg's view of the future of #AI? He believes it will and *should* be #OpenSource. Let's quickly touch on both sides of this #AISafety debate.

    Timed with the release of #Llama31, Mark published the view that "open source AI will be safer than the alternatives" primarily because "the systems are more transparent and can be widely scrutinized." That's absolutely true. While researchers like ours can perform 'black box' tests on closed-source models, one can perform much more intensive testing on open-source models. Some of our breakthroughs have been on optimising a single type of attack that works on multiple unrelated AI models. This is called a 'one-shot attack' and wouldn't have been possible without open-source AI. Open-sourcing models means their weaknesses are exposed to more researchers more quickly, implying effective mitigations can be found faster. This reduces what Mark calls 'unintentional harm'. Sounds great, right? But what about 'intentional harm'?

    On the other hand, releasing powerful, expensively trained artificial intelligence models into the world for free means anyone with the skillset, storage and GPUs can download, customise, and run these systems locally. Anyone. For example, this would make catching someone planning something nefarious through their search queries impossible. The FBI reported that #Trump's would-be assassin looked up assassination-related information leading up to the event. This traceable behaviour is at least workable for #LawEnforcement, but it will become increasingly easy to conduct criminal research offline in the future.

    What about guardrails to prevent these queries? #Llamaguard? Sure, but these guardrails can be broken, their models 'jailbroken'. We jokingly refer to these as 'criminal instruction manuals' and have written about our work jailbreaking #GenAI models for the UK's AI Safety Institute before. Right now, these models are still pretty big, so the barrier to entry is high, but over time you can be 💯 sure these systems will be applied to more criminal enterprise.

    As an impartial evaluator of AI systems, we're here to break any AI we can get our hands on and take no side on this debate – we're just happy the conversation is happening. What do you think?
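As an aside on why open weights permit "much more intensive testing": with access to a model's parameters an evaluator can compute gradients with respect to the input, which black-box probing cannot. The sketch below shows the simplest such white-box technique, an FGSM-style perturbation, on a toy classifier. The tiny model, the epsilon budget, and the random data are illustrative assumptions; this is not the transferable 'one-shot attack' mentioned in the post, nor any model Advai evaluates.

```python
# Minimal FGSM (Fast Gradient Sign Method) sketch: a white-box attack that
# needs the model's gradients, which is exactly what open weights expose.
# The tiny model and data here are placeholders, not any real system.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier; with an open-source model you would load real weights.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # a single input example
y = torch.tensor([1])                        # its assumed true label
epsilon = 0.1                                # perturbation budget (assumed)

# Compute the loss gradient w.r.t. the input -- not possible without weights.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Nudge the input in the direction that increases the loss.
x_adv = x + epsilon * x.grad.sign()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

A black-box tester can only query inputs and observe outputs; the gradient step above is the extra leverage that open weights provide, for defenders and attackers alike.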

  • Advai

    A few days ago, US lawmakers reminded OpenAI of its pledge to dedicate 20% of its computing resources to research on #AISafety. Roughly one year ago, the #SuperAlignment team was announced to great excitement. Well, we were excited!

    "We need scientific and technical breakthroughs to steer and control AI systems much smarter than us."
    "To solve this problem within four years, we're starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we've secured to date to this effort."

    It will be interesting to see how OpenAI responds, given Vox's recent headline indicating the initiative isn't doing too well... “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded.

