
A New Standard in Open Source AI: Meta Llama 3.1 on Databricks

Announcing the availability of Llama 3.1 models on the Databricks Data Intelligence Platform
Ahmed Bilal
Ankit Mathur
Hanlin Tang
Patrick Wendell

We are excited to partner with Meta to release the Llama 3.1 series of models on Databricks, further advancing the standard for powerful open models. With Llama 3.1, enterprises can now build the highest-quality GenAI apps without trading away ownership and customization for quality. At Databricks, we share Meta's commitment to accelerating innovation and building safer systems with open language models, and we are thrilled to make the suite of new models available to enterprise customers right from day one.

We have integrated Llama 3.1 natively within Databricks, making it easy for customers to build their applications with it. Starting today, Databricks customers can use Mosaic AI to serve and fine-tune the Llama 3.1 models, connect them seamlessly to Retrieval Augmented Generation (RAG) and agentic systems, easily generate synthetic data for their use cases, and leverage the models for scalable evaluation. These capabilities enable enterprises to take full advantage of their organization's unique data with the highest-quality open source models to build production-scale GenAI applications.

"I believe open source AI will become the industry standard and is the path forward. Partnering with Databricks on Llama 3.1 means advanced capabilities like synthetic data generation and real-time batch inference are more accessible for developers everywhere. I'm looking forward to seeing what people build with this."
— Mark Zuckerberg, Founder & CEO, Meta

Start using the highest-quality open models on Databricks today! Visit the Mosaic AI Playground to quickly try Meta Llama 3.1 and other Foundation Models directly from your workspace. For more details, see this guide.

What's New in Llama 3.1 Models?

Llama 3.1 models are the most capable open models to date and introduce many new capabilities, including:

  • Meta Llama 3.1-405B-Instruct is the world’s highest-quality open model today. It offers unmatched reasoning, steerability, and general knowledge that rival the best AI models, enabling developers to build complex applications that were previously impossible with open models.
  • Improved quality of existing 8B and 70B models, already used by over a thousand Databricks customers. On Databricks, you can easily move to the new models and instantly benefit from improved quality without any changes.
  • Expanded context length of 128k tokens, enabling analysis of large datasets and improving RAG applications by reducing hallucinations through access to more relevant context.
  • Support across 8 languages, allowing businesses to reach and engage with a broader customer base effectively.
  • Improved tool use and function calling, allowing the creation of complex multi-step agentic workflows that can automate sophisticated tasks and answer complex queries.
  • Upgraded Llama Guard and safety models, enabling secure and responsible deployment of compound AI systems for enterprise use cases.

Meta Llama 3.1 Model Collection

Meta Llama 3.1-8B-Instruct: An excellent small model offering fast responses at an unbeatable cost. Ideal for document understanding tasks such as metadata extraction and summarization, and for building fast customer interaction applications. It can be fine-tuned to exceed the quality of closed models on narrower enterprise tasks.

Meta Llama 3.1-70B-Instruct: This model balances intelligence and speed and is suitable for a wide range of enterprise workloads. It excels in use cases such as chatbots, virtual assistants, agentic workflows, and code generation.

Meta Llama 3.1-405B-Instruct: The highest-quality open source model, ideal for advanced use cases requiring complex reasoning and high accuracy in general knowledge, math, tool use, and multilingual translation. It excels in use cases such as advanced multi-step reasoning workflows, content generation, research review, brainstorming, and advanced data analysis. It can also be used as a judge for quality evaluation, and to generate synthetic data for improving smaller LLMs.

Developing with Llama 3.1 on Databricks Mosaic AI

Experiment with Llama 3.1 and Other Foundation Models

The Llama 3.1 family of models is now available in the system.ai catalog (within Unity Catalog) and can be easily accessed on Mosaic AI Model Serving using the same unified API and SDK that works with other Foundation Models. This unified interface allows you to easily experiment with, switch between, and deploy models within the Llama 3.1 collection and compare them to foundation models from any other provider. This flexibility ensures you can select the best model to meet the quality, latency, and cost requirements of your application. These models are available in Azure Databricks as well as in Databricks on AWS.
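As a minimal sketch of what the unified API looks like in practice, the snippet below queries a Llama 3.1 serving endpoint through the OpenAI-compatible client. The environment variables and the endpoint name databricks-meta-llama-3-1-70b-instruct are assumptions for illustration; check the Serving page in your workspace for the exact endpoint names available to you.

```python
import os
from openai import OpenAI

# Assumptions for illustration: DATABRICKS_HOST / DATABRICKS_TOKEN are set, and the
# workspace exposes an endpoint named "databricks-meta-llama-3-1-70b-instruct".
client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url=f"{os.environ['DATABRICKS_HOST']}/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-70b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key changes in Llama 3.1."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Switching between models in the collection is just a matter of changing the endpoint name, which makes side-by-side quality, latency, and cost comparisons straightforward.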


Extend Llama 3.1 with Your Proprietary Data to Improve Quality

Enterprises on Databricks are already using Mosaic AI Model Training to customize Llama models with their unique data, specializing them for specific business contexts and skills to build higher quality models. Customers can now benefit from customizing the new models, taking advantage of the extended context length and improved base quality of the 8B and 70B models, thereby improving overall application quality and opening up new use cases. 

Model Training now also supports Llama 3.1 405B, enabling enterprises to customize an open model with reasoning and capabilities on par with the leading AI models. These upgrades will roll out across regions as capacity comes online.
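As an illustrative sketch rather than an official recipe, the call below shows how a fine-tuning run might be launched with the Foundation Model Training Python API; the package import, model identifier, Unity Catalog table, and schema names are assumptions chosen for the example.

```python
from databricks.model_training import foundation_model as fm

# Hypothetical example: fine-tune Llama 3.1 8B Instruct on a chat-formatted
# Unity Catalog table and register the tuned model back to Unity Catalog.
run = fm.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",    # assumed model identifier
    train_data_path="main.finetuning.support_chats",  # assumed UC training table
    task_type="CHAT_COMPLETION",
    register_to="main.finetuning",                    # assumed UC schema for the output
    training_duration="3ep",                          # three passes over the data
)
print(run.name)
```

The registered model can then be deployed on Mosaic AI Model Serving like any other model in Unity Catalog.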

Deploy Intelligent Agents and RAG Apps with Llama 3.1

RAG applications and agents are the most popular GenAI applications on our platform, and we are excited about the new tool-use capabilities in Meta Llama 3.1.

With the newly introduced Mosaic AI Agent Framework and Evaluation, enterprises can use Meta Llama 3.1 to build the highest-quality AI systems, augmenting the models with their proprietary data using Mosaic AI Vector Search. Ours is the only Vector Search offering that is tightly integrated into your data platform, ensuring all downstream applications are safely governed and managed via a single governance layer, Unity Catalog.
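To make the retrieval step concrete, here is a minimal sketch that queries a Vector Search index with the databricks-vectorsearch client and assembles the retrieved chunks into context for a Llama 3.1 prompt; the endpoint name, index name, and column names are placeholders for this example.

```python
from databricks.vector_search.client import VectorSearchClient

# Placeholders: a Vector Search endpoint "rag-endpoint" and an index
# "main.rag.docs_index" with columns "chunk_id" and "chunk_text" are assumed to exist.
vsc = VectorSearchClient()
index = vsc.get_index(endpoint_name="rag-endpoint", index_name="main.rag.docs_index")

hits = index.similarity_search(
    query_text="How do I rotate service credentials?",
    columns=["chunk_id", "chunk_text"],
    num_results=3,
)
# Join the retrieved chunks into a context string for the Llama 3.1 prompt.
context = "\n\n".join(row[1] for row in hits["result"]["data_array"])
```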

Additionally, customers can already use Llama models for function calling, and the new updates will further improve the quality of these workflows.
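A hedged sketch of what tool use looks like against the same OpenAI-compatible endpoint is shown below; the get_order_status tool is hypothetical, and the endpoint name is again an assumption.

```python
import os
from openai import OpenAI

# Same client setup as before; the tool definition below is a hypothetical example.
client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url=f"{os.environ['DATABRICKS_HOST']}/serving-endpoints",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-70b-instruct",  # assumed endpoint name
    messages=[{"role": "user", "content": "Where is order 8123?"}],
    tools=tools,
)
# If the model decides to call the tool, the structured call arrives here.
print(response.choices[0].message.tool_calls)
```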

Together, these capabilities empower developers to create custom agents and explore new agentic behaviors in a single platform, unlocking a broader spectrum of use cases.

Accelerate Model Training and Evaluation with Synthetic Data Generation

With Llama 3.1's permissive license and the superior quality of the Llama 3.1-405B-Instruct model, enterprises can, for the first time, enhance their data flywheel with high-quality synthetic data. For example, when customizing your model with Model Training, you can automatically show samples from your dataset to the larger model and ask it to generate similar data.

Databricks makes this workflow easy through integration with the Foundation Model API and Foundation Model Training services, which can augment your Unity Catalog dataset all within the secure boundaries of the Data Intelligence Platform. We think this will transform customization quality and supercharge enterprise GenAI applications.
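As a rough sketch of that loop, the snippet below shows a few seed examples to the 405B endpoint and asks for similar labeled rows; the seed data, prompt, and endpoint name are illustrative assumptions, and generated rows should be reviewed before they are added to a training set.

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url=f"{os.environ['DATABRICKS_HOST']}/serving-endpoints",
)

# Illustrative seed examples drawn from a hypothetical labeled dataset.
seed_examples = [
    {"ticket": "App crashes when exporting a report", "label": "bug"},
    {"ticket": "Please add dark mode to the dashboard", "label": "feature_request"},
]

prompt = (
    "Here are labeled support tickets:\n"
    + "\n".join(f"- {e['ticket']} -> {e['label']}" for e in seed_examples)
    + "\nGenerate 10 new, realistic tickets with labels, one JSON object per line."
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-405b-instruct",  # assumed endpoint name
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1024,
)
# Review and filter the generated rows before adding them to a training set.
print(response.choices[0].message.content)
```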

Customers Innovate with Databricks and Open Models

Many Databricks customers are already leveraging Llama 3 models to drive their GenAI initiatives, and we're looking forward to seeing what they will do with Llama 3.1.

  • "With Databricks, we could automate tedious manual tasks by using LLMs to process one million+ files daily for extracting transaction and entity data from property records. We exceeded our accuracy goals by fine-tuning Meta Llama3 8b and, using Mosaic AI Model Serving, we scaled this operation massively without the need to manage a large and expensive GPU fleet." - Prabhu Narsina, VP Data and AI, First American
  • “With Databricks, we were able to quickly fine-tune and securely deploy Llama models to build multiple GenAI use cases like a conversation simulator for counselor training and a phase classifier for maintaining response quality. These innovations have improved our real-time crisis interventions, helping us scale faster and provide critical mental health support to those in crisis.”  - Matthew Vanderzee, CTO, Crisis Text Line
  • "With Databricks' unified data and AI platform and open models like Meta Llama 3, we have removed silos and simplified deployment, enabling us to deploy GenAI systems into production 20 times faster. This has allowed us to integrate GenAI more deeply across our product surface, leading to improvements in our overall product, operations, and overall efficiency." - Ian Cadieu, CTO, Altana
  • "Databricks is enabling us to take an idea to production in record time. By using smaller, state-of-the-art open models like Llama and customizing them with our data, we created a high-quality and cost-effective GenAI solution. Remarkably, this was developed by just one person and is already improving the productivity of our internal teams." - Thibault Camper, Senior Data Scientist, Locala
  • “Mosaic AI and state-of-the-art open models like Llama 3 empower us to create and securely deploy custom models based on our own data and business rules. This is allowing us to build novel GenAI features, automating 63% of tasks and enabling our development team to focus on innovation rather than manual processes.”  - Guilherme Guisse, Head of Data and Analytics, Orizon

Getting started with Llama 3.1 on Databricks Mosaic AI

Visit the AI Playground to quickly try Llama 3.1 directly from your workspace. For more information, please refer to the Databricks documentation.

These capabilities are rolling out across supported geographies based on compute availability.
