AI at Meta

Research Services

Menlo Park, California 877,785 followers

Together with the AI community, we’re pushing boundaries through open science to create a more connected world.

About us

Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas, and to enable the community to build safe and responsible solutions to address some of the world’s greatest challenges.

Industry
Research Services
Company size
10,001+ employees
Headquarters
Menlo Park, California
Specialties
research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing

Updates


    📣 Introducing Llama 3.2: Lightweight models for edge devices, vision models and more! What’s new?

    • Llama 3.2 1B & 3B models deliver state-of-the-art capabilities for their class for several on-device use cases — with support for Arm, MediaTek & Qualcomm on day one.
    • Llama 3.2 11B & 90B vision models deliver performance competitive with leading closed models — and can be used as drop-in replacements for Llama 3.1 8B & 70B.
    • New Llama Guard models to support multimodal use cases and edge deployments.
    • The first official distro of Llama Stack simplifies and supercharges the way developers & enterprises can build around Llama to support agentic applications and more.

    With Llama 3.2 we’re making it possible to run Llama in even more places, with even more flexible capabilities.

    Details in the full announcement ➡️ https://go.fb.me/8ar7oz
    Download Llama 3.2 models ➡️ https://go.fb.me/7eiq2z

    These models are available to download now directly from Meta and Hugging Face — and will be available across offerings from 25+ partners that are rolling out starting today, including Accenture, Amazon Web Services (AWS), AMD, Microsoft Azure, Databricks, Dell Technologies, Deloitte, Fireworks AI, Google Cloud, Groq, IBM, Infosys, Intel Corporation, Kaggle, NVIDIA, Oracle Cloud, PwC, Scale AI, Snowflake, Together AI and more.

    We’ve said it before and we’ll say it again: open source AI is how we ensure that these innovations reflect the global community they’re built for and benefit everyone. We’re continuing our drive to make open source the standard with Llama 3.2.


    Following #ECCV2024 from your feed? Here are seven examples of interesting research work being presented by teams working on AI across Meta.

    🔗 Your ECCV 2024 reading list
    1. Animal Avatars: Reconstructing Animatable 3D Animals from Casual Videos: https://go.fb.me/gs1w0y
    2. Digital Twin Catalog from Reality Labs Research: https://go.fb.me/ur265h
    3. uCAP: An Unsupervised Prompting Method for Vision-Language Models: https://go.fb.me/wj2ja4
    4. Sapiens: Foundation for Human Vision Models: https://go.fb.me/5uksso
    5. Controllable Human-Object Interaction Synthesis: https://go.fb.me/dcgygy
    6. ADen: Adaptive Density Representations for Sparse-view Camera Pose Estimation: https://go.fb.me/9brdhb
    7. Rasterized Edge Gradients: Handling Discontinuities Differentially: https://go.fb.me/kz2d83


    One year ago we opened applications for the first-ever Llama Impact Grants program, seeking proposals from around the world to use open source AI to address challenges in education, the environment and innovation. Now, we’re excited to announce the recipients of our first grants, with projects ranging from reading assessments in India to personalized maternal and newborn health support in Sub-Saharan Africa.

    See the full list of Llama Impact Grant and Llama Impact Innovation Award recipients ➡️ https://go.fb.me/khdznv


    We’re on the ground at #ECCV2024 in Milan this week to showcase some of our latest research, new artifacts and more. Here are four things you won’t want to miss from Meta FAIR, GenAI and Reality Labs Research this week, whether you’re here in person or following from your feed.

    1. We’re releasing SAM 2.1, an upgraded version of the Segment Anything Model 2 — and the SAM 2 Developer Suite featuring open source tools for training, inference and demos. Live in the Segment Anything repo on GitHub ➡️ https://go.fb.me/mk6ofh
    2. We’re supporting 10+ presentations and workshops in areas like computer vision for smart glasses and the metaverse, 3D vision for eCommerce, egocentric research with Project Aria and more.
    3. We’re presenting seven orals at ECCV — in addition to the 50+ publications from researchers at Meta that were accepted for this year’s conference. Look out for more details on some of these papers later this week.
    4. Demos and discussions with Meta researchers at our booth all week — come by to discuss projects like SAM 2, Ego-Exo4D, DINOv2 and more.


    Ready to start working with our new lightweight and multimodal Llama 3.2 models? Check out all of the newest resources in the updated repos on GitHub.

    Llama GitHub repo ➡️ https://go.fb.me/1sn5cb
    Llama recipes ➡️ https://go.fb.me/3w78ol
    Llama Stack ➡️ https://go.fb.me/ci7y5w
    Model Cards ➡️ https://go.fb.me/2dtbbu

    The repos include code, new training recipes, updated model cards, details on our new Llama Guard models and our first official release of Llama Stack.
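    As a quick orientation before diving into the repos, here is a minimal sketch of assembling a chat prompt using the special tokens from the Llama 3 family's instruct format; the model cards linked above are the authoritative reference, and the function name here is our own illustration, not part of any Meta repo.

    ```python
    # Illustrative sketch: render chat messages into the Llama 3-family instruct
    # prompt format. See the official model cards for the canonical template;
    # `format_llama_chat` is a hypothetical helper, not an official API.

    def format_llama_chat(messages: list[dict[str, str]]) -> str:
        """Render {"role", "content"} messages into a single prompt string."""
        parts = ["<|begin_of_text|>"]
        for msg in messages:
            parts.append(
                f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
                f"{msg['content']}<|eot_id|>"
            )
        # Trailing assistant header cues the model to produce the reply next.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
        return "".join(parts)

    prompt = format_llama_chat([
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize Llama 3.2 in one sentence."},
    ])
    ```

    In practice you would rarely build this string by hand — tokenizers shipped with the models apply the chat template for you — but seeing the layout makes the model cards easier to read.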


    With Llama 3.2 we released our first-ever lightweight Llama models: 1B & 3B. These models outperform competing models on a range of tasks even at smaller sizes; feature support for Arm, MediaTek and Qualcomm devices; and empower developers to build personalized, on-device agentic applications with capabilities like summarization, tool use and RAG with strong privacy where data never leaves the device. We’ve shared more, including reference applications as part of the Llama 3.2 release. Details and model downloads ➡️ https://go.fb.me/vbjzj3
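    The on-device RAG pattern the post describes can be sketched in a few lines: retrieve relevant local documents, build a grounded prompt, and pass it to a locally hosted model. Everything below is an illustration under stated assumptions — `local_generate` is a stand-in for a call into an on-device Llama 3.2 1B/3B runtime, and none of these functions are official Meta APIs.

    ```python
    # Minimal on-device RAG sketch: all data stays local; only the stubbed
    # `local_generate` would be replaced by a real on-device model call.

    def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
        """Rank docs by naive word overlap (a real app would use embeddings)."""
        q = set(query.lower().split())
        scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    def build_prompt(query: str, context: list[str]) -> str:
        """Ground the model's answer in the retrieved local context."""
        ctx = "\n".join(f"- {c}" for c in context)
        return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

    def local_generate(prompt: str) -> str:
        # Stub: swap in a call to a locally running Llama 3.2 1B/3B model.
        return f"[model reply to {len(prompt)} chars of prompt]"

    docs = [
        "Meeting notes: budget approved Tuesday.",
        "Recipe: lentil soup with cumin.",
    ]
    answer = local_generate(
        build_prompt("When was the budget approved?",
                     retrieve("budget approved", docs))
    )
    ```

    The privacy property the post highlights falls out of the structure: retrieval, prompt construction and generation all run on the device, so the documents never leave it.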
