Adaptive ML

2,819 followers

Technology, Information and Internet

AI, Tuned to Production.

About us

Continuously evaluate and adapt models with synthetic data and production feedback to surpass frontier performance—from your cloud or ours.

Industry: Technology, Information and Internet
Company size: 11-50 employees
Headquarters: Paris
Type: Privately Held
Founded: 2023
Specialties: Generative AI, Reinforcement Learning, Large Language Models, RLHF, RLAIF, Monitoring, A/B Testing, and Post-Training

Updates

What is Proximal Policy Optimization (PPO), and why does it seem so complicated? PPO requires four models. Importance sampling. Clipping. KL divergence. But, most importantly... why bother when there are other, simpler methods, like supervised fine-tuning (SFT), for training my model? Our latest blog, From Zero to PPO: Understanding the Path to Helpful AI Models, builds an intuitive understanding of PPO and how it differs from other tuning techniques: https://lnkd.in/eUSMbJYx

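For readers who want the math behind those buzzwords: the importance sampling and clipping come together in the clipped surrogate objective from the original PPO paper (Schulman et al., 2017), where r_t(θ) is the probability ratio between the current and the old (data-collecting) policy and Â_t is the advantage estimate:

```latex
L^{\mathrm{CLIP}}(\theta) =
  \hat{\mathbb{E}}_t\!\left[
    \min\!\Big( r_t(\theta)\,\hat{A}_t,\;
    \operatorname{clip}\!\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t \Big)
  \right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

In RLHF settings, a per-token KL divergence penalty against a frozen reference model is typically added as well, which is where the fourth model comes in.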
Why isn't supervised fine-tuning (SFT) good enough to train a model for my use case? SFT gives an LLM a fish; it does not teach it to fish. 🪝🐟

Wrestling helpful answers out of a pre-trained model isn't simple. With only a few words as context, it's hard for the model to judge how to continue appropriately. An adequate continuation might be a Shakespearean sonnet, a paragraph fit for your teenage blog, or a quick answer to a travel question. All of these, and more, are weighted equally in the pre-training corpus.

Let's fix that. Why not extend the simple next-word prediction of pre-training, but with data illustrative of the conversations we want the model to have? This is the basis of supervised fine-tuning (SFT). Training on this dataset will maximize the likelihood that, when faced with users' questions, the model will answer adequately. After SFT, the model will adhere to the examples in our gold dataset, delivering similarly helpful answers.

However, SFT has a pathological shortcoming related to generalization. SFT provides specific demonstrations of the right answer to the model ex nihilo: the model did not come up with them on its own. SFT gives the LLM a fish; it does not teach it to fish. Parroting gold answers can lead to poor generalization when the LLM is left to its own devices, resulting in hallucinations. Additionally, producing complete gold demonstrations can be costly and hard to scale.

A more effective training process would be for the LLM to suggest completions and learn from the evaluation of those completions instead. See how in our blog, From Zero to PPO: Understanding the Path to Helpful AI Models: https://lnkd.in/eUSMbJYx

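To make "maximize the likelihood" concrete, here is a minimal sketch of one SFT step in PyTorch with Hugging Face Transformers. The model name and the toy demonstration are placeholders for illustration, not Adaptive ML's stack or data:

```python
# Minimal sketch of the SFT objective: next-token cross-entropy on a gold
# demonstration. Model name and example text are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any causal LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One gold demonstration: a prompt paired with the answer we want imitated.
demo = (
    "User: What's the fastest way from Paris to Lyon?\n"
    "Assistant: Take the TGV; it's about two hours."
)
batch = tokenizer(demo, return_tensors="pt")

# Passing labels = input_ids makes the model predict each next token of the
# demonstration; the library shifts labels internally and returns the
# cross-entropy loss, i.e. the negative log-likelihood that SFT minimizes.
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
```

Scaling this to a full gold dataset is just batching and iterating the same cross-entropy update; the shortcoming discussed above is that the model only ever imitates demonstrations, it never proposes answers and gets graded on them.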
Pretrained LLMs are aliens of extraordinary intelligence, yet little understanding. 👽 How do post-training techniques like SFT, REINFORCE, and PPO work in tandem to turn these aliens into helpful AI assistants?

Helpfulness is instilled in LLMs through extensive post-training. One approach in particular has been exceptionally successful: Reinforcement Learning from Human Feedback (RLHF). RLHF enables models to learn directly from human preferences, capturing rich, nuanced feedback rather than relying solely on specific gold examples.

One of the de facto engines of RLHF has been Proximal Policy Optimization (PPO). Taken at face value, PPO is puzzling: when applied to LLMs, it involves no fewer than four different versions of the model interacting together (policy, value, reward, and reference), and is driven by an intricate loss function.

In our blog, we build up to PPO, starting from supervised fine-tuning (SFT). We connect the dots between rejection sampling, reward models, REINFORCE, and Advantage Actor-Critic, drawing a deeper understanding of how to tune LLMs to deliver helpful, harmless, and honest answers.

Read From Zero to PPO: Understanding the Path to Helpful AI Models: https://lnkd.in/eUSMbJYx

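As a rough sketch of how the four models fit together, the loss below combines the clipped policy term, a KL penalty toward the reference model, and a value-model regression in PyTorch. Treating the KL term as a loss penalty (rather than folding it into the reward) and the specific coefficients are simplifying assumptions for illustration, not Adaptive ML's implementation:

```python
# Rough illustration (not Adaptive ML's code) of how the four RLHF models
# interact in a PPO-style update. All arguments are per-token tensors that
# would come from the policy, reference, reward, and value models.
import torch
import torch.nn.functional as F

def ppo_style_loss(
    logprobs_new,   # log-probs of sampled tokens under the current policy
    logprobs_old,   # log-probs under the policy snapshot that generated the batch
    logprobs_ref,   # log-probs under the frozen reference (post-SFT) model
    values,         # value-model estimates of expected reward per token
    returns,        # reward-model-derived returns the value model regresses toward
    advantages,     # advantage estimates (e.g. from GAE)
    clip_eps=0.2,   # illustrative hyperparameters
    kl_coef=0.1,
    vf_coef=0.5,
):
    # Importance-sampling ratio between the current and data-collecting policy.
    ratio = torch.exp(logprobs_new - logprobs_old)

    # Clipped surrogate: take the pessimistic minimum of the unclipped and
    # clipped terms so each update stays close to the old policy.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()

    # KL-style penalty toward the reference model; shown here as a loss term,
    # though many RLHF setups fold it into the reward instead.
    kl_penalty = kl_coef * (logprobs_new - logprobs_ref).mean()

    # The value model learns to predict returns, yielding better advantages.
    value_loss = vf_coef * F.mse_loss(values, returns)

    return policy_loss + kl_penalty + value_loss
```

Here the policy is the model being tuned, the reference is its frozen post-SFT copy, the reward model scores the generated responses, and the value model estimates expected reward so those scores can be turned into per-token advantages.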
📣 At #VDS2024, Adaptive ML CTO Baptiste Pannier joined Ahmed Menshawy of Mastercard, Margarida Garcia of poolside, and Neema Balolebwami Nelly of NEEMA AI for a discussion on the challenges and rewards of getting GenAI into production. 📣

The conversation, moderated by Ruben Colomer Flos of Next Tier Ventures, centered on how organizations can leverage LLMs to foster innovation and enhance operational efficiency while avoiding potential legal challenges. Baptiste highlighted the importance of keeping models in tune with operations, continuously learning from production feedback to ensure consistent performance.

🎉 Thanks to VDS for hosting and organizing a great event! 🎉

🥳 Please welcome the newest addition to Adaptive ML, Tugdual de Kerviler! Tug joins us as a Frontend Engineer, building intuitive interfaces for our product and ensuring a seamless user experience 💻

A repeat founder - Diversified (acquired by Konvi) and Nirror (acquired by AB Tasty) - and seasoned software engineer, Tug was most recently a Machine Learning Engineer at SalesScan. There, he designed and orchestrated LLM agents to automate observations on sales opportunities.

🦸 Want to join us too? We're hiring for three roles: a Research Engineer, a DevOps Engineer, and another Frontend Engineer. Apply now 👉 https://lnkd.in/e_PEEbif

Adaptive ML is growing! 🌱 We're looking for three exceptional engineers to join our Product and Technical teams - a Research Engineer, a DevOps Engineer, and a Frontend Engineer. Help us build agentic workflows with dozens of models interacting and teaching one another, scale the storage of user interactions to trillions of records, and build a first-class user experience for adapting and evaluating models from production feedback.

Research Engineer: you will help build the foundational technology powering Adaptive ML, typically by contributing to our internal LLM stack, Adaptive Harmony. Harmony is a unified inference+training system, which lowers users' requests and recipes into atomic instructions for GPU workers. The codebase makes extensive use of Rust for most of its logic, Python for end users, and dedicated CUDA/Triton kernels for LLM performance. Apply now: https://lnkd.in/gfkeH_dD

DevOps Engineer: you will help package our technology and turn it into an exceptional product. You will work on all DevOps aspects of our software, from systematic deployment to scaling production databases, as well as supporting internal workloads. Challenges you may face are likely to arise from coordinating complex GPU infrastructure and scaling the storage of user interactions to trillions of records in a robust manner. Apply now: https://lnkd.in/gb226tXJ

Frontend Engineer: you will work on the frontend powering our product and be responsible for implementing new product features jointly with our other frontend and backend engineers. Challenges you may face are likely to arise from the visual display of complex quantitative information (e.g., large-scale A/B tests) in an honest, clean, and efficient manner. Apply now: https://lnkd.in/gJN-qMMv

Pioneer production-tuned AI with us: https://lnkd.in/e_PEEbif

A warm welcome to Pearson Probst, who joins us as Marketing Lead!

Prior to Adaptive ML, Pearson was Global Head of Content at Palantir Technologies, leading product marketing strategy as well as editorial across social, website, paid, video, and events. One of the foundational communications hires at Palantir, Pearson helped guide messaging from its DPO to the company's inclusion in the S&P 500.

Want to join us too? Explore our open roles 👉 https://lnkd.in/e_PEEbif

Adaptive ML was honored to be featured in Motier Ventures' GenAI Show, alongside Mistral AI, poolside, Dust, Gladia, Payflows, and Twin.

CEO Julien Launay and CTO Baptiste Pannier took the stage for a conversation with Marie Outtier on how we're enabling production-tuned AI - with models learning continuously from production feedback to surpass frontier performance.

It was a particular honor to speak with Clara Chappaz, France's first Minister for Artificial Intelligence, about the future of AI implementation, model fine-tuning, and adoption across enterprises.

🚀 New strategic AI collaboration! 🚀 Adaptive ML is pleased to announce a new strategic partnership with aïkan!

🤝 Our shared goal? To revolutionize the legal industry with AI by building the next generation of legal chatbots.

🔍 Our ambition? To create a custom large language model (LLM) specifically designed and trained for legal tasks.

Aïkan is a software provider that leverages its unique expertise at the intersection of law and artificial intelligence to build Juribot, a legal chatbot for insurers. Building on their experience with Juribot.fr, which handles over 35,000 queries monthly, and supported by a team of data scientists and legal experts, they continue to train and improve their legalbot every day to provide the most reliable legal responses for both litigants and legal professionals.

Recently funded with a record 20 million euros in seed capital, Adaptive ML offers expertise in cutting-edge reinforcement learning. This partnership marks a pivotal moment for AI integration in the 🇫🇷 French legal sector, with a clear objective: to offer custom, reliable, and secure LLM solutions that outperform closed LLM APIs.

Training has already begun, and we can't wait to share more with you in the coming months. Stay tuned!


Funding

Adaptive ML: 1 total round
Last round: Series A, US$ 20.0M

See more info on Crunchbase