📣 Introducing Llama 3.2: Lightweight models for edge devices, vision models and more!
What’s new?
• Llama 3.2 1B & 3B models deliver state-of-the-art capabilities for their class for several on-device use cases — with support for Arm, MediaTek & Qualcomm on day one.
• Llama 3.2 11B & 90B vision models deliver performance competitive with leading closed models — and can be used as drop-in replacements for Llama 3.1 8B & 70B.
• New Llama Guard models to support multimodal use cases and edge deployments.
• The first official distro of Llama Stack simplifies and supercharges the way developers & enterprises can build around Llama to support agentic applications and more.
With Llama 3.2 we’re making it possible to run Llama in even more places, with even more flexible capabilities.
Details in the full announcement ➡️ https://go.fb.me/8ar7oz
Download Llama 3.2 models ➡️ https://go.fb.me/7eiq2z
These models are available to download now directly from Meta and Hugging Face — and will be available across offerings from 25+ partners that are rolling out starting today, including Accenture, Amazon Web Services (AWS), AMD, Microsoft Azure, Databricks, Dell Technologies, Deloitte, Fireworks AI, Google Cloud, Groq, IBM, Infosys, Intel Corporation, Kaggle, NVIDIA, Oracle Cloud, PwC, Scale AI, Snowflake, Together AI and more.
We’ve said it before and we’ll say it again: open source AI is how we ensure that these innovations reflect the global community they’re built for and benefit everyone. We’re continuing our drive to make open source the standard with Llama 3.2.