Featured in the latest Starlet issue by #StarHistory! 🌟 Star History is the #1 GitHub star history graph on the web, and tracks trending open-source projects and their growth over time. In their latest issue, you'll discover how our open-source library for testing LLMs helps identifying risks, such as hallucinations or prompt injections. It saves developers time by automating the evaluation process and money by avoiding costly AI incidents. A big thank you to Star History for featuring us! 🙌 📖 Read the article: https://gisk.ar/4eIbT8t #LLMeval #opensource #GitHub #AITesting
Giskard
Développement de logiciels
The Testing platform for AI models
À propos
We provide a holistic testing platform for AI models, for enterprise teams to control all risks related to AI. We help customers save time by automating the evaluation & compliance process, and save money by avoiding costly AI incidents. We’re backed by founders of Hugging Face and Mistral AI ; and our solution is trusted by major enterprises including L’Oréal, Michelin and AXA. Need to control AI risks? Contact us today at giskard.ai/contact!
- Site web
-
https://www.giskard.ai/
Lien externe pour Giskard
- Secteur
- Développement de logiciels
- Taille de l’entreprise
- 11-50 employés
- Siège social
- Paris
- Type
- Société civile/Société commerciale/Autres types de sociétés
- Fondée en
- 2021
- Domaines
- AI, ML, Quality, Risk et LLM
Produits
Giskard | Testing platform for AI models
Logiciel de système de gestion de la qualité (QMS)
Protect your company against biases, performance issues & security vulnerabilities in AI models.
Lieux
-
Principal
Paris, FR
Employés chez Giskard
-
Simon Dawlat
CEO at Batch
-
Aurélien Barrière
💰 Financez vos projets innovants💡 On vous conseille pour débloquer votre Crédit Impôt Recherche et + 💪 +de 1000 clients 🎖️ Experts en financement…
-
Nick Hernandez
CEO, raised $200m to make corporate learning collaborative
-
Alex Combessie 🐢
Co-founder & CEO @ Giskard | Control AI Risks ⛑️
Nouvelles
-
🚨 Have you taken our free course on #RedTeaming LLM applications yet? 🏴☠️🚩 Developed in collaboration with DeepLearning.AI, is designed to help you ensure the safety and robustness of your LLM apps. You'll learn how to: ⬩ Identify vulnerabilities in your LLM apps ⬩ Conduct red teaming exercises to test your systems ⬩ Explore real-world prompt injection attacks on chatbots ⬩ Understand and address LLM failure modes ⬩ Gain hands-on experience with both manual and automated red teaming methods 👉 Enroll for free: https://gisk.ar/4bmAaOg #LLMs #RAG #AISafety #AIRedTeaming
-
Recognized as a top AI innovator in #MAD50! 🤯 Our CEO, Alex Combessie 🐢 has been recognized in MAD50 ranking! MAD50, organized by Maddyness and partnered with Banque Richelieu, spotlights 50 emerging leaders shaping the future of AI. This recognition underscores the importance of our work to build more responsible AI, by addressing quality, safety, and compliance risks. Thank you to MAD50, and congrats to all the other innovators recognized alongside us 🙌 Learn more about the ranking 🔗 https://gisk.ar/3XygUdB #ResponsibleAI #AISafety #AISecurity
-
🌟 Join us today in the open-source AI developer meetup organized by Docker, Inc! 🐳🐢 Alex Combessie 🐢, our CEO, we'll demo our open-source RAG toolkit, that can evaluate RAG agents automatically. It evaluates components such as the generator, retriever, rewriter, router, and knowledge base. Don't miss demos from Hugging Face, Weaviate, Koyeb, LlamaIndex, Amazon Web Services (AWS), Red Hat, Neo4j, GaiaNet, and, Cohere. 🇫🇷 STATION F in Paris 📆 20 June at 6 PM Join us: https://gisk.ar/3VypBSo #opensource #RAG #GenAI #AIdev
-
We're looking for a Senior Data Scientist to lead customer projects 🎉 In this role, you'll lead red teaming AI applications for customers, identifying vulnerabilities and providing guidance to clients on mitigating risks and optimizing their LLM apps. You'll collaborate closely with our R&D, sales, and marketing teams, supporting the sales pipeline and creating technical content. If you're seeking to join a dynamic company where you can grow and help build responsible AI, this is your chance! We can’t wait to meet you 🤩 Questions or thoughts? Feel free to comment below. Apply here 👉🏻 https://gisk.ar/4esZvc3 #DataScience #LLMs #AIRedTeaming #hiring
-
🚀 Missed our last webinar on #RedTeaming LLMs? The replay is now available! 🎥 Follow Alexandre Landeau's talk, and learn how to enhance the security of LLM applications using red-teaming techniques. Key takeaways: - Apply cybersecurity red-teaming to LLMs - Automatically detect vulnerabilities in LLM apps - Scale your security efforts Thank you to Open Data Science Conference (ODSC) for hosting this session! 👉 Watch the replay now: https://gisk.ar/45v31in #LLMs #AIsecurity #ODSC
-
Follow our talk at Open Data Science Conference (ODSC)! today! 🔥 Join Alexandre Landeau for his talk, "Secure LLM App Deployments—Strategies and Tactics," where he'll explain how to enhance the security of Large Language Models (LLMs). Discover how red-teaming techniques from cybersecurity can be applied to identify and evaluate vulnerabilities in LLM applications. Learn how Giskard's tools integrate into your workflow for automatic vulnerability detection, helping you scale your security efforts for Generative AI. A big thank you to ODSC for hosting this talk 🙌 📆 June 12, 2024 ⏰ 6PM (CET) - 12PM (ET) Register now for free: https://gisk.ar/3VmTJAg #RedTeaming #LLMs #AIsecurity #ODSC
-
We're hiring! 🚀 We're looking for a #MLEngineer Intern to join our team! In this role, you'll participate in red-teaming LLM applications for our customers, finding vulnerabilities and adding new evaluation methods. You'll also provide technical guidance to customers on building secure and robust LLM apps. If you are passionate about responsible AI, and want to join a fun team, apply here! 🐢 🔗 https://gisk.ar/3z6huEY #LLMs #AIRedTeaming #MachineLearning #hiring
-
✨ New integration with Databricks MLflow! ✨ This partnership brings together Giskard's LLM evaluation capabilities and MLflow's model management features. Databricks users can now automatically identify vulnerabilities on ML models and LLMs, generate domain-specific tests, and compare model performance across different versions. What's Giskard's open-source scan? It ensures the automatic identification of vulnerabilities in ML models and LLMs, such as hallucinations, reliability and robustness issues. Learn more about the integration 👉 https://lnkd.in/eNxnPM_5 #databricks #mlflow #LLMeval #MLOps
-
How LLMOps is different from MLOps? 🪄🤔 Short for Large Language Model Operations, LLMOps focuses on the deployment, management, and optimization of LLMs in production environments. In this article, you'll discover: ✅ The key differences between MLOps and LLMOps ✅ Challenges in productionizing LLM applications (and how to overcome them) ✅ Best practices for implementing LLMOps in your organization You'll gain insights into maximizing the performance, reliability, and cost-effectiveness of your LLM deployments. Link to the article 🔗 https://gisk.ar/4c8mwzi #LLMs #LLMOps #MLOps #MachineLearning