The summer of AI is only just beginning, and you need a really fast charger

Good morning!

It isn’t often that Google CEO Sundar Pichai sits in on embargoed briefings with a few of the global tech media. That’s exactly what happened in one of the two select briefings I was part of, ahead of the annual I/O keynote. Pichai speaking with us was a pleasant surprise – in his company were senior Google executives Demis Hassabis (who Pichai referred to as “Sir Demis” during the keynote!), Sissie Hsiao and Liz Reid. You could look at this in two ways. Either Google is very confident about its direction and foundation with artificial intelligence models, products and positioning. Or there was a sense of broader trepidation, hours ahead of a set of announcements that define a pivotal moment for Google, and its future with AI.

Google I/O 2024: Google Gemini plans for the present, and a complex future

That said, now that the I/O 2024 keynote excitement has settled, if Google is indeed worried about where it finds itself in the artificial intelligence (AI) stakes amidst intense competition, it isn’t about to let you in on it. They’ve detailed a vision that is (as it should be) outwardly confident, built on a vibrant yet focused strategy. More so because their next set of AI implementations is about accessibility – and they have no problem with the numbers, since their user base, across every dimension, is massive. There are reasons why I say that.

There’s an updated Gemini 1.5 Pro model with logical reasoning, a new Gemini Live that’ll get more capabilities later this year, a new and lightweight Gemini 1.5 Flash that is equal parts amazing and scary (the ability to chat with a chatbot while it sees the world through a phone’s camera is strange), updates for Gemini Nano with multimodal improvements, and AI Overviews in Search. Not to forget, generative AI’s promised big steps forward in realism, with the text-to-video model Veo and the text-to-image model Imagen 3 at the very centre. And then there are Gems for custom Gemini implementations, and extensions that’ll make Gemini integrate better across Google’s apps.
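
If you’re curious to compare Pro and Flash yourself, both models are already exposed through the Gemini API. Here’s a minimal sketch using Google’s google-generativeai Python package – the API key and prompt are placeholders, and this is just a quick way to poke at the models, not a Google-recommended pattern:

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Gemini 1.5 Flash: the lightweight, lower-latency model announced at I/O.
flash = genai.GenerativeModel("gemini-1.5-flash")

# Gemini 1.5 Pro: the larger model with stronger reasoning.
pro = genai.GenerativeModel("gemini-1.5-pro")

prompt = "Summarise the key announcements from Google I/O 2024 in three bullet points."

for name, model in [("Flash", flash), ("Pro", pro)]:
    response = model.generate_content(prompt)
    print(f"--- {name} ---\n{response.text}\n")
```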

I spoke about reach earlier. There are more than 3 billion active Android users – the sort of reach no competing computing platform has (Windows 11’s Copilot integration reached potentially 500 million PCs). Google is reaching out to that base with AI, which for many users will be their first tryst with it. Microsoft’s similar approach with Copilot reaped rewards; Google’s chances of success are even greater. They realise it is a long process. Sameer Samat, vice president of Product Management for Android at Google, told me it is about “reimagining Android's consumer experience and the way you interact with your phone with AI at the core, and that multi-year journey begins now.”

Then there’s utility. Google’s detailing of AI integration points to an agent that can organise all the receipts in your Gmail into a spreadsheet, a tool that’ll locate order details for a product you’d like to return and help with the process (global implementation will be difficult, and shopping sites have their own restrictive processes), help plan a trip, or have AI detect the intent to scam you in a voice call and alert you (this is still being tested; it’s difficult to say where it’s headed).
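
To make the receipts example concrete, the building blocks for such an agent already exist today. Here’s a rough sketch of my own (not Google’s agent) that searches Gmail via the official API and dumps basic receipt metadata into a CSV – the creds object, search query and output file name are assumptions:

```python
# pip install google-api-python-client google-auth
import csv
from googleapiclient.discovery import build

# Assumes OAuth credentials with the gmail.readonly scope are already obtained.
def export_receipts(creds, out_path="receipts.csv"):
    service = build("gmail", "v1", credentials=creds)
    # A naive keyword query; a real agent would classify messages properly.
    result = service.users().messages().list(
        userId="me", q="subject:(receipt OR invoice)", maxResults=50
    ).execute()

    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Date", "From", "Subject"])
        for ref in result.get("messages", []):
            # Fetch only the headers we need, not the full message body.
            msg = service.users().messages().get(
                userId="me", id=ref["id"], format="metadata",
                metadataHeaders=["Date", "From", "Subject"],
            ).execute()
            headers = {h["name"]: h["value"] for h in msg["payload"]["headers"]}
            writer.writerow([headers.get("Date", ""),
                             headers.get("From", ""),
                             headers.get("Subject", "")])
```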

Google I/O 2024: An AI chapter for Android, covering billions of users in one go

I am sure you are asking the question – when does my Android phone get the new AI features? The answer is in two parts. First, there will be Google Play services and system updates that enable some of the AI functionality detailed at I/O. This may happen anytime in the next few weeks, depending on the phone and the pace of the update roll-out. That part is in Google’s hands. The second stage requires chipmakers and phone makers to optimise experiences before the updates can be sent to users – more so for the latter, because work will be needed to make the functionality play well with the customisations that often define Android phones and tablets: Samsung’s One UI, Xiaomi’s HyperOS, OnePlus’ OxygenOS, and so on.

BATTLEFRONT

The world had known for a while about the date of the Google I/O 2024 keynote. That generated enough intrigue by itself, then dialled up when OpenAI teased an announcement a day earlier. Social media opinion went into overdrive (that’s quite easy; nothing out of the ordinary). Would we see the speculated Apple and OpenAI partnership confirmed? Would the GPT-5 model be released? What surprise did OpenAI have in store for us?

Turns out, it was an iterative GPT-4o model. It is up to 2x faster than GPT-4 Turbo, has improved multimodal capabilities and, as OpenAI CTO Mira Murati (a good time to plug my column about Murati and the art of the poker face) describes, has improved “capabilities across text, vision, and audio.” A key feature is real-time translation, which comes via a new voice mode. I’m yet to try this (it’s been a very busy week), but you can within the ChatGPT app now (or at some point this week), even if you are on the free tier. This will compete directly with Google’s Gemini Live. Ah, the battles continue.
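
If you’d rather not wait for the app roll-out, GPT-4o is already callable through OpenAI’s API, with text and images mixed in a single request. A minimal sketch with the official Python SDK – the image URL and prompt are placeholders:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# GPT-4o accepts mixed text and image input in a single message.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what's in this photo."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```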

Interestingly, OpenAI’s ChatGPT desktop app comes to Macs first. It can work as a chatbot, much as you may have used it within the web browser or on the phone. Or it can be opened as a smaller window alongside an already open app, giving it access to see what you see – and the conversation begins from there. One can only imagine what Microsoft, which has invested more than $10 billion in OpenAI so far, must be thinking. Is it a case of that being where the ChatGPT users are, or where the ChatGPT users will be in the near future? Get the hint!

CHARGE

Let us lighten the mood a bit. All this AI talk can get stressful, can’t it? Let’s talk fast-charging systems. They’re cool, relevant, and not as easy to make as we imagine. In the past few days, I wrote about a charging hub, the Acefast Z4 PD218W GaN. The difference between a fast-charging hub and a fast charger lies in the design, which dictates the approach. A hub, which has multiple ports for simultaneous charging (some chargers have 2-3 ports too), is slightly bigger in footprint. That allows for a bump in the top spec and, subsequently, the flexibility of charging multiple fast-charging devices at once. And then, unlike other charging equipment, you can select how much power each device gets.

With 218 watts as its overall capacity, divided between the three USB-C ports and the single USB-A port, you can configure this to charge laptops and fast-charging phones simultaneously. I’ve detailed how the four modes work – the most interesting is 100 watts each on two USB-C ports, which means a single charging hub is powerful enough to run two Apple MacBook Pro 14-inch laptops with M3 Max chips simultaneously (these require a 96-watt adapter each). Charging phones and tablets is a breeze. For around Rs 8,500 (there are often deals and discounts too), this is an ideal accessory to have on your work desk or even the bedside table.
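
For the curious, the mode arithmetic is easy to sanity-check. A tiny sketch – the port allocations below are illustrative assumptions, not Acefast’s published mode table:

```python
# Illustrative power-split check for a multi-port GaN hub.
# These per-port allocations are assumptions for the sake of the
# arithmetic, not Acefast's official spec sheet.
TOTAL_W = 218

modes = {
    "two laptops": {"USB-C1": 100, "USB-C2": 100, "USB-A": 18},
    "laptop + phones": {"USB-C1": 140, "USB-C2": 45, "USB-C3": 20, "USB-A": 12},
}

for name, ports in modes.items():
    used = sum(ports.values())
    verdict = "fits" if used <= TOTAL_W else "over budget"
    print(f"{name}: {used} W of {TOTAL_W} W ({verdict})")
    # A 14-inch MacBook Pro with M3 Max ships with a 96 W adapter,
    # so a 100 W port comfortably covers one.
```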


