I'm excited to share this session happening at "The Fifth Elephant 2024 Annual Conference":
Building and Deploying LLM Applications: From Concept to Production - AMA with Mixture-of-Experts Session 🧠
(https://lnkd.in/gfuHrS8M)
*Motivation:*
Building LLM applications is supposedly about getting a dataset, indexing it in a vector DB, slapping on an OpenAI wrapper, and packaging it in Gradio - and LangChain can do the majority of this work. Boom! You are ready to deliver a product in 10 minutes. This happy workflow is the popular narrative.
But real life throws many curveballs at you, and it is not as simple as we were led to believe. We have all seen it with run-of-the-mill chat-with-your-PDF apps - only to be left frustrated the moment we want to scale to more varied documents!
Many difficult design & build choices have to be made that require asking general but foundational questions such as:
What to solve - problem formulation? Why LLMs?
How to solve - problem breakup, task prioritization, timelines & deliverables, team composition, etc.
What to optimize and what to trade off: cost, performance, correctness?
What does an MVP look like for my problem?
Buy vs Build? When to switch from commercial models to local models? Wrappers or custom models?
How to handle private data? Is using open-source LLMs the only way to handle PII data, or can proprietary LLMs handle it too?
How to evaluate - the system as a whole and its parts, including A/B testing?
Should one start with RAG, jump straight to fine-tuning, or use frameworks like DSPy or prompt engineering? How does one solve the cold-start problem and generate synthetic datasets?
How does one incorporate agentic workflows and design patterns? Is it feasible to leverage existing frameworks, or should one roll one's own?
How does one monitor costs, latencies, and other relevant metrics with LLMs?
What are the SLAs and how to achieve them?
How are LLMs being applied in verticals such as Healthcare, Education, Deep Tech, FinTech, and Agriculture, among others, in both for-profit and not-for-profit settings?
Or the questions can be very specific, such as "How do I reduce the cost from INR 1/- per conversation to INR 0.1/- per conversation?"
You are not alone. Our experts have gone through it all. Engage with our Mixture of Experts to go over the entire dev cycle across different verticals.
*The Mixture of Experts (MoE)*
Chintan Donda, Senior ML Engineer, Wadhwani AI
Sai Nikhilesh Reddy, Associate ML Scientist, Wadhwani AI
Pulapakura Sravan, Software Associate, Data Science & Programming, JP Morgan Chase
Prathamesh Saraf, Gen AI Backend Engineer, TrueFoundry
Rajaswa Patil, Applied AI, Postman
Praveen Pankajakshan, Chief Scientist, CropIn AI Lab, CropIn
*Host*
Soma Dhavala, Founder, ML Square; ex-Wadhwani AI
Come over for an enriching discussion spanning the many facets of the LLM app life cycle in production.
Zainab Harshad Vikram