Until this year, I had never owned a smartphone other than an iPhone. But with AI making its way into every tech product on the planet, I had to try Android to understand the differences between artificial intelligence in the two ecosystems.
After using a Samsung Galaxy S25 for a few weeks, I went back to my iPhone 16 Pro Max. Not because it was better, but because the ecosystem you've built your life around ends up being the deciding factor when choosing between flagship smartphones.
Once back on iOS, I found myself missing one specific AI feature more than any other, and without access to it on iPhone, I quickly began to miss living with an Android device.
The AI feature I'm talking about is Gemini Live, and while you could access it on iOS, the experience was lackluster. That was until yesterday, at Google I/O 2025, when Google announced that all of Gemini Live's capabilities are coming to iPhone, at no cost.
Here's why Gemini Live is the best AI tool I've ever used, and how bringing all of its capabilities to iPhone means I'm ready to return to Apple.
What Visual Intelligence wanted to be
Gemini Live already existed in the Gemini app on iOS, but it lacked two crucial elements that make the Android version far better. First, Gemini Live on iOS couldn't access your iPhone's camera, and second, it couldn't see what you were doing on your screen. I/O 2025 changed all that.
Now, iPhone users can give Gemini Live access to their camera and screen, enabling new ways to interact with AI that we haven't really seen on iOS before.
Gemini Live's camera capability alone is one of, if not the, best AI tools I've used to date, and I'm delighted that iPhone users can now experience it.
What is Gemini Live's camera feature? Well, imagine a better version of what Apple wanted Visual Intelligence to be. You can simply show Gemini whatever you're looking at and ask questions without needing to describe the subject.
I've found that Gemini Live's camera feature thrives in situations like cooking. I used it last week to make birria tacos, and not only did it give me advice at each step, it could also see everything I was doing and help steer me toward a delicious dinner.
Not only did propping my S25 on a stand give Gemini Live the perfect angle, but because it can connect to Google apps, I could ask it to pull up details of a recipe directly from the creator's video. No need to constantly touch your phone with messy hands in the kitchen, and no need to keep checking a recipe. Gemini Live can do it all.
A companion at every step of the way
Screen sharing allows Gemini Live to see what's on your display at any time, letting you ask questions about images, about something you're working on, or even about how to finish a puzzle in a game. It's really cool, similar to the Apple Intelligence-powered Siri we were promised at WWDC 2024 but never received.
Gemini Live's free rollout has only just begun, so we've yet to see how the feature will perform on iOS. That said, if it works half as well as it does on Android, it's a feature I could see a lot of people falling in love with.
Gemini Live and its multiple ways of interacting with the world completely unlock AI on a smartphone, and now that iPhone users can access it too, I have no reason not to return to the Apple ecosystem.