Google has been pitching Project Astra as the next generation of AI for months. That set high expectations when 60 Minutes sent Scott Pelley to experiment with the Astra tools at Google DeepMind.
He came away impressed by how articulate, observant, and insightful the AI proved throughout his tests, especially when it not only recognized a moody Edward Hopper painting, but also read the woman's body language and spun a fictional vignette of her life.
All of it came through a pair of smart glasses that looked barely different from a pair without any AI built in. The glasses serve as a delivery system for an AI that sees, hears, and can understand the world around you. That could set the stage for a new smart-wearable race, but it's just one of the many things we learned during the Project Astra segment about Google's plans for AI.
Understanding Astra
Of course, we have to start with what we now know about Astra. The AI assistant continuously processes video and audio from connected cameras and microphones in its surroundings. It doesn't just identify objects or transcribe text; it also claims to pick up on emotional tone, extrapolate context, and carry on a conversation about a subject, even when you pause to think or talk to someone else.
During the demo, Pelley asked Astra what he was looking at. It instantly identified Coal Drops Yard, a retail complex in King's Cross, and offered background information without missing a beat. When shown a painting, it didn't stop at "it's a woman in a cafe." It said she looked "contemplative." And when pushed, it gave her a name and a backstory.
According to DeepMind CEO Demis Hassabis, the assistant's understanding of the real world is progressing even faster than expected; it's better at making sense of the physical world than his engineers thought it would be at this stage.
Veo 2 views
But Astra doesn't just watch passively. DeepMind has also been busy teaching AI how to generate photorealistic images and video. The engineers described how, two years ago, their video models struggled to understand that legs are attached to dogs. Now, they showed how Veo 2 can conjure up a flying dog with flapping wings.
The implications for visual storytelling, film, advertising, and, yes, augmented-reality glasses are profound. Imagine your glasses not only telling you which building you're looking at, but also visualizing what it looked like a century ago, rendered in high definition and blended seamlessly into your current view.
Genie 2
And then there's Genie 2, DeepMind's new world-modeling system. If Astra understands the world as it exists, Genie builds worlds that don't. It takes a still image and turns it into an explorable environment, viewable through smart glasses.
Walk forward and Genie invents what's around the corner. Turn left and it fills in the unseen walls. During the demo, a photo of a waterfall turned into a playable video-game level, generated dynamically as Pelley explored.
DeepMind is already using Genie-generated spaces to train other AIs. Genie can help them learn to navigate a world composed by another AI, and in real time, too. One system dreams, another learns. That kind of simulation loop has enormous implications for robotics.
In the real world, robots have to find their way through trial and error. But in a synthetic world, they can train endlessly without breaking the furniture or risking a lawsuit.
Astra Eyes
Google is trying to put Astra-style perception in your hands (or on your face) as quickly as possible, even if that means giving it away.
Just weeks after launching Gemini's screen-sharing and camera features as a premium perk, Google reversed course and made them free for all Android users. That wasn't a random act of generosity. By getting as many people as possible to point their cameras at the world and chat with Gemini, Google gets a flood of training data and real-time user feedback.
There's already a small group of people around the world wearing Astra-powered glasses. The hardware reportedly uses micro-LED displays to project captions into one eye and delivers audio through tiny directional speakers near the temples. Compared with the clunky sci-fi visor of the original Google Glass, it sounds like a step forward.
Of course, there are questions of privacy, latency, and battery life, and the not-so-small matter of whether society is ready for people walking around in semi-alien glasses without mocking them mercilessly.
Whether or not Google can make this magic ethical, non-invasive, and stylish enough to go mainstream is still up in the air. But that feeling of 2025 as the year of smart glasses seems more accurate than ever.