Understanding Apple Intelligence, Apple’s Proprietary AI Model: Here's How It Compares to Other Models and How It Will Use ChatGPT

  • Apple has built an AI model with 3 billion parameters.

  • ChatGPT will work as an extra layer when Siri doesn’t know the answer, but it won’t be exclusive.

Apple Intelligence

Apple’s AI isn't what we thought it would be. When the company's collaboration with OpenAI leaked, it seemed like Sam Altman’s company would have much more influence, but this wasn’t the case. Apple Intelligence is an artificial intelligence model developed entirely by Apple and has nothing to do with ChatGPT.

While Apple's partnership with OpenAI is official, there are two separate projects on the table. On one hand, there's Apple Intelligence and Siri, and on the other is the incorporation of ChatGPT. In this post, we'll explain what Apple has developed, OpenAI's role, and how Apple Intelligence is positioned in the AI battle, where the company has joined the fray with its own weapons.

During the Apple Intelligence presentation, Apple executives showed off the AI model's features and explained its integration into iOS 18, iPadOS 18, and macOS Sequoia. They also revealed that it would have two versions: one running locally on-device and another connected privately to the cloud via "Private Cloud Compute." However, Apple didn't provide any technical details about the model.

Fortunately, Apple's Machine Learning Research team has published a short report on the technical specifications of the large language model (LLM) that powers Apple Intelligence. Below we'll break down its key features, which will help us better understand how far Apple's AI, presented at WWDC 2024, will go.

An In-House AI Model With 3 Billion Parameters

Apple Intelligence is a set of AI models specializing in "everyday user tasks." Apple tuned these models to excel at tasks like writing, summarizing, creating images, and simplifying app interactions. In other words, users will be able to perform these tasks through Siri on the iPhone.

Apple’s AI model has about 3 billion parameters for the on-device version and an even larger model that runs on Apple servers powered by Apple silicon chips and renewable energy. In simple terms, there's an “on-device” AI model and another that runs on Apple servers.
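A quick back-of-the-envelope calculation shows why a 3-billion-parameter model is a plausible size for running on a phone. The arithmetic below is purely illustrative; Apple hasn't published the precision or quantization scheme it actually uses.

```python
# Rough weight-storage footprint of a 3-billion-parameter model at common
# precisions. Illustrative only: Apple's actual quantization is not public.
PARAMS = 3_000_000_000

def model_size_gb(num_params: int, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

print(model_size_gb(PARAMS, 2))    # fp16/bf16: 6.0 GB
print(model_size_gb(PARAMS, 0.5))  # 4-bit quantization: 1.5 GB
```

At 16-bit precision the weights alone would strain an iPhone's RAM, which is why aggressive quantization is the standard recipe for on-device models of this size.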

Apple has limited itself to using these two AI models but points out that it will unveil new models soon.

In size, Apple Intelligence is almost identical to its direct rival, Google's Gemini Nano. Where Apple's on-device model has 3 billion parameters, Gemini Nano comes in two versions: one with 1.8 billion parameters for devices with less RAM and another with 3.25 billion parameters for the Pixel and other high-end Android devices.

Apple relies on AXLearn, an open-source framework it released in 2023, to train its models. As for the content used to build this model, Apple explains that it utilized licensed data and publicly available content collected by its web crawler, Applebot.

Apple notes that it never uses private user data or interactions to build this AI model and applies filters to remove any personally identifiable user information from the web. It also applies filters to remove low-quality or duplicate data. Finally, the company points out that websites that don’t want their content used for Apple Intelligence can opt out. These aren't new arguments, which tells us that Apple has followed the traditional recipe when creating its AI.
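The opt-out Apple describes works through the standard robots.txt mechanism. As an illustration (the exact user-agent string a site should block is documented by Apple, and `Applebot-Extended` here reflects Apple's published guidance for AI-training opt-out), a site could add something like:

```text
# robots.txt — allow Applebot to crawl for search features,
# but opt out of content being used for AI model training.
User-agent: Applebot-Extended
Disallow: /

User-agent: Applebot
Allow: /
```

This mirrors the approach OpenAI and Google have taken with their own training-specific crawler tokens.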

Apple's adapters

Apple also optimized the training of its AI model. For one, the company says it's reduced the memory required by using a compact token vocabulary: the "on-device" version uses a 49K vocabulary, while the server model uses 100K. In comparison, Gemini Nano works with a 32K vocabulary.

As we explained earlier, Apple has trained its AI model to perform specific tasks. The company defines these as "adapters," small sets of task-specific parameters layered on the base model to improve different tasks. For example, Apple uses a set of 750 questions per task to evaluate whether the adapter's answers are correct. Among the functions where Apple has applied this approach are "brainstorming, classification, closed-ended question answering, programming, extraction, mathematical reasoning, open-ended question answering, rewriting, security, summarizing, and writing."
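Adapters of this kind are typically low-rank updates (in the style of LoRA): the large base weights stay frozen and shared across tasks, while each task trains only a tiny correction. The sketch below is an assumption about the mechanism for illustration; Apple's report describes adapters on a shared base model but not this exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 1024  # hidden dimension of one base weight matrix
r = 16    # adapter rank, tiny compared to d

W_base = rng.standard_normal((d, d))  # frozen, shared across all tasks
A = np.zeros((d, r))                  # per-task adapter factors (trainable)
B = rng.standard_normal((r, d)) * 0.01

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # The adapter adds a low-rank correction W_base + A @ B, so each task
    # only stores A and B instead of a full copy of the base weights.
    return x @ (W_base + A @ B)

base_params = W_base.size
adapter_params = A.size + B.size
print(f"base: {base_params:,}  adapter: {adapter_params:,}")
```

Swapping adapters in and out is how one 3-billion-parameter model can specialize in writing, summarizing, and the other listed tasks without shipping a separate model per task.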

On Par With GPT-4 Turbo on Specific Tasks

Apple includes several benchmarks and compares itself to some of today's leading AI models. Its on-device AI model is compared to Microsoft's Phi-3 mini, Mistral AI's Mistral-7B, and Google's Gemma-7B. Meanwhile, its model on Apple servers is compared to GPT-4 Turbo and Mixtral 8x22B.

IFEval Benchmarks

According to the IFEval benchmark, which measures an AI's ability to follow instructions, Apple Intelligence models match or outperform open-source and commercial models of similar size.

There are also several writing and text summarization benchmarks where Apple's AI excels. However, Apple chose all of these benchmarks itself. We'll have to wait for independent tests to see if Apple's AI can compete with the leading models broadly or only on the tasks Apple has specifically trained it for.

Apple has adopted a customization strategy for its AI, but there's no sign of multimodality. It can work with text and voice, but there's no reference to video, as Google offers with Project Astra or OpenAI with GPT-4o. That's a shame, because a future Apple Vision Pro could take advantage of it.

How Apple Protects Information When Sending It to the Cloud

The promise of Apple Intelligence is this: AI functions will be performed on device and processed using the “on-device” AI model. However, if the task is too complex, it will connect to Apple’s servers.

The goal of Apple Intelligence is to perform most tasks directly on the iPhone, iPad, or Mac. To do this, Apple requires at least an iPhone 15 Pro (with the A17 Pro) or an M1 processor or higher. Private Cloud Compute kicks in if Apple’s processor isn’t powerful enough for the task you want the AI to perform. Specific cases where this could happen include image analysis, summarizing a large text, or searching for information on the web.
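The routing Apple describes can be pictured as a simple dispatch decision. The sketch below is a toy model under stated assumptions: the task names, the token threshold, and the dispatch logic are all hypothetical, since Apple hasn't published how it actually decides when to escalate to Private Cloud Compute.

```python
# Toy sketch of on-device vs. Private Cloud Compute routing.
# Task names and threshold are assumptions for illustration only.
CLOUD_TASKS = {"image_analysis", "long_summarization", "web_search"}

def route(task: str, input_tokens: int, on_device_limit: int = 49_000) -> str:
    """Return where a request would be processed under this toy model."""
    if task in CLOUD_TASKS or input_tokens > on_device_limit:
        return "private_cloud_compute"
    return "on_device"

print(route("rewrite", 800))                 # short writing task stays local
print(route("long_summarization", 120_000))  # oversized job goes to the cloud
```

The point of the design is that the local path is the default, and the cloud path is an exception triggered by task complexity or size.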

Apple says it's created a dedicated cloud infrastructure for AI, one that it promises is “the most secure and advanced ever created for AI in the cloud." So, what is it doing differently? Here are some of Apple's promises.

For example, the data sent is encrypted end-to-end. When that’s impossible, Apple “strives to process user data ephemerally or under uncorrelated random identifiers that obscure the user’s identity.”

Apple recognizes that security and privacy in the cloud are challenging to verify and guarantee. The company will release more details once Private Cloud Compute is available in beta to developers. Still, it promises to enforce specific requirements, such as "user data can only be used for the purpose the user has requested" and that none of this data will be visible "even to Apple employees."

At the component level, Apple intends to avoid relying on external elements for security. For error analysis and server metrics, it will opt for services with strong privacy guarantees.

The role of Apple’s employees here is relevant. The company states that Private Cloud Compute won’t provide them with additional permissions to bypass restrictions. Finally, it states that it will allow any security researcher to review the operation of its infrastructure to verify that it’s meeting data protection promises.

Siri Will Connect to ChatGPT, But It Won’t Be Exclusive

Siri

As you can see, Apple didn’t involve OpenAI at any point in the development of Apple Intelligence, so where’s the partnership? By the end of this year, Apple will integrate ChatGPT into Siri and its writing tools.

This means that for specific questions, Siri will refer the query to ChatGPT, and the chatbot will answer users directly. As such, Siri users will have free access to ChatGPT without creating a new account.

There is one point that Apple explained but didn't demonstrate: when users want to use ChatGPT, Siri will ask for their permission first. Once users agree to use ChatGPT, they enter different terrain, because Apple will transfer their data to OpenAI's servers.

However, Apple promises that OpenAI won’t store requests and will hide users’ IPs, in accordance with the promise of Private Cloud Compute. This phase of data processing raises the most privacy concerns.
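The consent-gated handoff described above can be sketched as a simple flow: answer locally when possible, and only forward to ChatGPT after the user explicitly agrees. Everything below (function names, the `siri_knows` flag) is hypothetical scaffolding for illustration, not Apple's or OpenAI's API.

```python
# Toy sketch of Siri's consent-gated ChatGPT handoff. All names here are
# assumptions; neither Apple nor OpenAI has published this interface.
def answer(query: str, siri_knows: bool, ask_user_consent) -> str:
    if siri_knows:
        return f"Siri: answered '{query}' on device"
    if not ask_user_consent(query):
        return "Siri: request not sent anywhere"
    # Only at this point would the query leave the device for OpenAI servers.
    return f"ChatGPT: handled '{query}'"

always_yes = lambda q: True
always_no = lambda q: False
print(answer("set a timer", True, always_no))
print(answer("write a sonnet", False, always_yes))
```

The notable design choice is that refusal is a safe default: a declined prompt means the query never leaves the device at all.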

Apple's partnership with OpenAI won’t be exclusive. During the presentation, Craig Federighi, Apple’s senior vice president of software engineering, explained that Siri could add new AI systems beyond ChatGPT. Not even Google's Gemini was excluded. With this move, Apple has an ace up its sleeve to find new allies, continue to develop its own AI, stand out with an open system amid regulatory scrutiny, and cover markets where ChatGPT is unavailable.

This is the case in China, where Apple and its iPhones are very popular. With this Apple Intelligence strategy, Siri could work alongside AI systems from companies like Baidu or even state-owned AI. In essence, Apple has created its own AI model, but it's also opened the door for other companies to help it get to places that are out of its reach.

Images | Apple

Related | Apple Intelligence, iOS 18, Siri With ChatGPT, macOS 15 Sequoia, and Everything Else Announced at WWDC 2024
