Excited to launch our weekly summary feature 🪩 Get a summary of your high level metrics + specific feedback on your workflows and custom evaluations right to your inbox every week 📨! Stay ahead of bugs or regressions with @lytixai
lytix ai
Technology, Information and Internet
The best way to include text analysis in your product analytics and triage bugs
About us
- Website
- https://lytix.co/
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Type
- Privately Held
Updates
-
LLM developers deserve some automation too! 🚀🙏
Super excited for this one 🚀 Get a weekly summary of your LLM stack, delivered straight to your inbox ✉️! Automatically get a set of trends and users that deserve your attention 👀

One thing I'm always hearing from current and prospective customers: LLM systems are extremely complex and highly multi-variable. And even though developers are using LLMs to build automated systems for their customers, they're forced to review their own data manually!

We're hoping this is just the first of many steps toward giving LLM developers the same automation they're building for others 🔨
-
lytix ai reposted this
Day 3 🚨 Dropping the next batch of companies from The #AIHot100 Market Report (East Coast)... curated by The AI Furnace 🧨🔥 ft. 100 of the hottest AI startups (pre-seed to series A) 🧯💥
➡️ Colossyan - Create engaging learning experiences with AI video
➡️ Ohai.ai - Say ohai to an AI Assistant for parents
➡️ lytix ai - Datadog for LLMs
➡️ Wild Moose - Helping on-call devs tame production chaos with Generative AI
➡️ Alinea Invest - AI money manager app
➡️ Cosmos - A Pinterest alternative for creatives
➡️ Opkit - Automating Routine Calls for Healthcare Providers with AI
➡️ NuMind (YC S22) - LLM-powered custom NLP
➡️ Wondercraft - the AI audio studio for creatives
➡️ Spinach AI - The AI Project Manager that automates meeting notes, tasks, and tickets
We curated the Hot 100 list from thousands of submissions with a committee of AI, technical, startup and industry experts, and brought these founders together at The #AIHot100 Conference in NYC last week. Some hidden 💎s here - this is the culmination of one year of building grassroots community with The AI Furnace 🧨🔥, getting to know founders, seeing under-the-radar startups, and some who cannot demo due to being in stealth. The report has Sector View, Layer View, List View and more.
👉 If interested in seeing the full list, sign up for our newsletter to see the full PDF version https://lnkd.in/eGpzsTQr
#AIHot100 #aifurnace
Dominik Mate Kovacs, Sheila Lirio Marcelo, Sahil Sinha, Sid Premkumar, Yasmin Dunsky, Eve Halimi, Andy McCune, Sherwood Callaway, Etienne Bernard, Oskar Serrander, Josh Willis
-
🏆💯
Honoured that lytix ai is joining the #aihot100 by The AI Furnace 🧨🔥
-
🪩 We’ve pushed some awesome features over the past few weeks and wanted to take the time to showcase our favorite 🎉 https://lnkd.in/eiZspbft
-
🚀🚀🚀
lytix ai (YC W24) is the one control panel to observe, manage, and optimize your E2E LLM stack. It helps teams at any stage with model experimentation, inference guardrails, and custom evaluations.

Experimenting with multiple models can be a hassle because setting up interfaces for each provider is slow and cumbersome. As your product becomes more specialized, tracking performance becomes difficult since standard metrics don't capture the specific details you care about. Protecting your inference calls from failures is also a challenge, as finding and integrating the right guardrails can be time-consuming and confusing. And while simple logging is useful, it becomes hard to interpret and quickly find bugs, especially when multiple tasks are logged in one place.

lytix helps teams of all sizes build faster, save money, and reduce tech debt. It provides a single gateway for all models and providers with just a one-line change. You can protect your inference calls with guardrails against known failures and set up real-time fallback logic. It also lets you create custom evaluation metrics to track performance and quickly spot regressions. And lytix organizes logs by task, making it easier to debug issues and analyze key metrics, such as identifying the most expensive tasks.

Build something people love with lytix today - it's free to start, for up to 100k messages. Congrats on the launch, Sid Premkumar and Sahil Sinha! 🚀 https://lnkd.in/gnKqyWcu
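To make the "organize logs by task" idea concrete, here's a minimal sketch of task-level cost aggregation - the kind of question it answers, like "which task is most expensive?". The log fields (`task`, `cost_usd`) and numbers are hypothetical illustrations, not lytix's actual schema or API.

```python
from collections import defaultdict

def most_expensive_tasks(logs, top_n=3):
    """Aggregate per-call cost by task and return the costliest tasks.

    Each log record is a dict with hypothetical fields:
    {"task": str, "cost_usd": float}
    """
    totals = defaultdict(float)
    for record in logs:
        totals[record["task"]] += record["cost_usd"]
    # Sort tasks by total spend, highest first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

logs = [
    {"task": "summarize", "cost_usd": 0.004},
    {"task": "classify", "cost_usd": 0.001},
    {"task": "summarize", "cost_usd": 0.006},
]
print(most_expensive_tasks(logs, top_n=2))
```

Grouping by task rather than dumping every call into one stream is what makes a regression in a single workflow stand out instead of drowning in unrelated traffic.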
-
lytix ai reposted this
Meta released a ~1 month update on the Llama family of models. TLDR - Llama 3.1 is growing like crazy 📈, and Zuck clearly has his eye on enterprise 👀 https://lnkd.in/gS-hk-fB

👉 Growth data
According to Meta:
1. Llama models are approaching 350 million downloads to date (more than 10x the downloads compared to this time last year)
2. Usage is up too - token volume has doubled in just 3 months, and monthly token volume is up 10x since the same time last year.
3. Conclusion: "Llama is now the leading open source model family." - Meta

👉 Enterprise
I was surprised how explicitly Meta's blog post calls out the various enterprise use cases they've seen. It's honestly an impressive variety of companies - showcasing projects from companies like Shopify and Zoom, but also traditional players like Accenture, Goldman Sachs and AT&T. A go-ahead from companies like Accenture and AT&T can go a long way toward onboarding fellow non-tech-forward conglomerates. It's clear Meta wants enterprises of all kinds to feel comfortable playing with Llama. (Why? Check out our post here on Meta's overall strategy with Llama, and how they're thinking about enterprise https://lnkd.in/gR2Q5h2T)

👉 Open Source
Finally, the blog post reaffirms Meta's commitment to open source AI. While this is certainly an attractive angle for them, I think it's worth pointing out a couple of caveats:
- When Meta started 'open sourcing' their models by releasing model weights, it's not clear they meant to do so. The original LLaMA's weights were leaked on 4chan after Meta released them to a limited set of parties. It seems to be only after seeing the engagement from the open source community that Meta leaned into the 'open weights' model (https://lnkd.in/gp7xpJBi)
- What even is open source for LLMs? Critics of Meta have pointed out that this isn't truly open source in the way we're used to. While open model weights can make customizations (like finetuning) incredibly easy, we're still not seeing the actual data or code for generating Llama 3.1. (Don't take my word for it - here's the Open Source Initiative's response to Meta's claims that Llama 2 was open source due to publicly released model weights https://lnkd.in/gfXQ_ipG)
-
lytix ai reposted this
This LORAX does NOT 🙅 speak for the trees 🌲🌳🌴 - he speaks for finetuned models everywhere 🔊

LoRAX, by Predibase, is an open-source service (https://lnkd.in/guKYfYXU) that helps AI developers train and host their own suite of small language models. Specifically, finetuned models can be more cost-effective and reliable than general-purpose models, especially for teams exiting the prototyping stage. But the developer experience of actually training and deploying a finetuned model has been fiddly and cumbersome thus far. LoRAX solves this in a couple of major ways 👇

First, a declarative framework for training, deploying, and calling finetuned models. This provides a far smoother developer experience, reducing the code you need to train a smaller model from 20+ lines to just 3-4.

Second, LoRAX makes hosting suites of small models cheaper, faster, and more scalable - all with less developer hassle. All your small models are hosted on a single GPU (more cost-effective than dedicating resources to each model). LoRAX also makes your deployment more memory-efficient and scalable (via a strategy they call "tiered weight caching").

And for folks curious how this looks under the hood 👀
1. As the name suggests, LoRAX finetunes via LoRA.
2. For the declarative framework, LoRAX is built on Ludwig (https://lnkd.in/gmJHhbju, an open source framework developed by Uber for declarative AI/ML development).
3. And for more context on LoRAX and Predibase:
LoRAX: Serve 1000s of Fine-Tuned LLMs on a Single GPU @ The Linux Foundation https://lnkd.in/g9sKpG_s
A more hands-on demo: https://lnkd.in/g8Bab697
Blogs n docs: https://lnkd.in/gSPCP32b, https://lnkd.in/gh38ssg2, https://lnkd.in/guKYfYXU
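For the curious, here's roughly what querying a running LoRAX server looks like - a sketch based on LoRAX's documented REST /generate endpoint, where an adapter_id parameter picks which finetuned LoRA adapter to apply on top of the shared base model. The endpoint URL and adapter name below are placeholder assumptions, not a real deployment.

```python
import json

# Default local LoRAX endpoint (assumption; depends on how you launch the server)
LORAX_URL = "http://127.0.0.1:8080/generate"

def build_request(prompt, adapter_id, max_new_tokens=64):
    """Build the JSON body for a LoRAX /generate call.

    `adapter_id` tells the server which finetuned LoRA adapter to load
    and apply for this request; many adapters share one base model/GPU.
    """
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "adapter_id": adapter_id,  # hypothetical adapter name
        },
    }

body = build_request("Summarize this ticket: ...", adapter_id="acme/support-summarizer")
print(json.dumps(body, indent=2))

# To actually call a running server you'd POST this body, e.g.:
# requests.post(LORAX_URL, json=body, timeout=30)
```

Swapping `adapter_id` per request is the whole trick: one GPU, one base model, many specialized small models.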
-
lytix ai reposted this
Well RIP 🪦 my inbox from folks asking for the code we went through 😅 Check out our latest cookbook 👨🍳 to walk through safeguarding your workflows with Optimodel (https://lnkd.in/g8wchawU)
Last week Sid and I had the opportunity to speak at Mindstone's August event in Toronto 🇨🇦. We shared Optimodel 🔀 , our open source tool for AI developers, designed to minimize costs and error rates on your inference calls. Big thanks as always to the Mindstone crew for the support (Joshua Wöhle and Alan Wunsche), and thanks to the Toronto AI community for the great turnout and questions! (if you're building ⚒️ give Optimodel a try, or at least a ⭐️ https://lnkd.in/g8wchawU)
-
lytix ai reposted this
In the spirit of "launch quickly 🚀 and then iterate 🔁", lytix ai is rolling out the v2 of Optimodel 🔀 - our open source framework for routing models and safeguarding LLM workflows 🛡️ in-code.

Optimodel started as a side project to help developers automatically call the cheapest model available when making inference calls. We assumed that price per inference would get increasingly varied as more and more foundational models are released on more and more inference engines. In that environment, we thought devs would want an easy way to guarantee they were getting the best price for each inference call.

After some ~lovely feedback from the kind folks on Hacker News 😅 (and our early lytix customers ❤️), we saw teams using Optimodel more to safeguard complex LLM workflows than to minimize costs. So we reworked Optimodel to focus on helping teams make foundational models work for their product needs.

1. Check out the lytix blog for a full breakdown of Optimodel v2 👉 https://lnkd.in/gTsEQsAB
2. Give Optimodel a try yourself! https://lnkd.in/g8wchawU (and if you're not ready to build just yet, consider leaving a Github star ⭐️ 🙏)
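The "cheapest model first, fall back on failure" idea behind Optimodel can be sketched generically in a few lines. To be clear, this is a hypothetical illustration of the routing pattern, not Optimodel's actual API - the model names, prices, and call interface are all made up.

```python
# Illustrative price table, cheapest-first routing with fallback.
# Prices are made-up USD per 1M tokens, not real provider pricing.
MODELS = [
    {"name": "small-model", "price": 0.15},
    {"name": "medium-model", "price": 0.60},
    {"name": "large-model", "price": 5.00},
]

def call_with_fallback(prompt, call_model):
    """Try models cheapest-first; on any failure, fall back to the next one."""
    errors = {}
    for model in sorted(MODELS, key=lambda m: m["price"]):
        try:
            return model["name"], call_model(model["name"], prompt)
        except Exception as exc:  # in practice: timeouts, rate limits, guardrail trips
            errors[model["name"]] = str(exc)
    raise RuntimeError(f"all models failed: {errors}")

# Demo with a fake backend where the cheapest model happens to be down:
def fake_backend(name, prompt):
    if name == "small-model":
        raise TimeoutError("provider timeout")
    return f"{name} answered: {prompt}"

used, answer = call_with_fallback("What is LoRA?", fake_backend)
print(used, "->", answer)
```

The v2 shift described above is essentially moving the emphasis from the price sort to the except branch: the same loop that saves money also keeps a workflow alive when a provider misbehaves.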