In the 1990s, web browser upstart Netscape bragged about operating on "Internet time," a term that rankled Microsoft cofounder Bill Gates so badly that he once sputtered that the only way to win that battle was for his company to ship new products at double that speed.
Gates was intensely competitive, but that was a different era, one in which Microsoft often took several years to release new versions of Windows and Office. These days, "Internet time" seems hopelessly quaint, and the software giant has for years shipped software updates at a far more frenetic pace.
Well, things are speeding up yet again.
Thanks largely to an unexpectedly aggressive Microsoft—which announced what we now call Copilot less than 18 months ago—we're now living in "AI time." In this unpredictable new era, it’s reasonable to expect to wake up to a new AI-based innovation almost every single day.
Frankly, this pace is unsustainable, and you don't have to be an AI denier to see that. AI is just software, and it's still bound by the same constraints as any other software, albeit on a far faster delivery schedule than ever before. And AI's "put up or shut up" moment is upon us: Between Google I/O (last week), Microsoft Build (this coming week), and Apple's WWDC (in June), we should head into late 2024 with a much clearer idea of what we can expect over the next several months.
This much is clear. We're on the cusp of a new phase in this AI era, one in which the expensive, cloud-based capabilities we're still struggling to grasp will be augmented and, in some cases, even replaced by new on-device AI capabilities that require major changes to the smartphones, PCs, and web browsers we use every day.
I am referring, of course, to hybrid AI. Moving AI workloads from the cloud to our devices is the holy grail for Big Tech, and while the benefits to users have been murky to date, Google I/O provided a template I expect to be repeated at Build and WWDC. That is, it's informative to examine the hybrid AI capabilities that Google just announced, mostly for Android, and then consider how those types of changes might improve Windows PCs, Macs, and iPhones and iPads.
Today, all the mainstream AI services we access are delivered from cloud datacenters at great cost. But implementing hybrid AI isn't a simple client-server scenario. Instead, it requires major investments by platform makers, and its success depends in part on users' willingness to upgrade the devices they use. And that will require some marketing: The ability to remove or blur the background of an image slightly faster isn't going to inspire anyone to spend $1,000+ on a new AI PC.
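To make the "hybrid" part concrete, here is a minimal sketch of the routing idea: run a request on the device when the hardware and the request allow it, and fall back to the cloud otherwise. Everything here (the class names, the 8 GB threshold, the token limit) is a hypothetical illustration, not any vendor's actual API.

```python
# Hypothetical sketch of a hybrid AI dispatcher: prefer on-device inference,
# fall back to a cloud service when the device can't handle the request.

from dataclasses import dataclass


@dataclass
class DeviceCapabilities:
    has_npu: bool        # NPU present for local acceleration
    free_ram_gb: float   # memory available for model weights


class OnDeviceModel:
    def generate(self, prompt: str) -> str:
        return f"[local answer to: {prompt!r}]"


class CloudClient:
    def generate(self, prompt: str) -> str:
        return f"[cloud answer to: {prompt!r}]"


def answer(prompt: str, caps: DeviceCapabilities,
           local: OnDeviceModel, cloud: CloudClient,
           max_local_tokens: int = 512) -> str:
    """Route a request locally when the device can handle it, else to the cloud."""
    small_enough = len(prompt.split()) <= max_local_tokens
    if caps.has_npu and caps.free_ram_gb >= 8 and small_enough:
        return local.generate(prompt)   # fast, private, no per-query cloud cost
    return cloud.generate(prompt)       # bigger model, higher latency and cost


if __name__ == "__main__":
    caps = DeviceCapabilities(has_npu=True, free_ram_gb=16)
    print(answer("Blur the background of this photo", caps,
                 OnDeviceModel(), CloudClient()))
```

The point of the sketch is the routing decision itself: the user-visible feature is the same either way, which is exactly why the hardware requirements below matter more than the marketing suggests.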
New generations of phones, tablets, and PCs will of course ship with Neural Processing Units (NPUs), specialized processors that accelerate AI workloads locally. But less obviously, these devices will also require CPU and GPU upgrades, and they will need far more RAM than is typical today. Lookin...