Google leaves its audience speechless with a demonstration of its innovative new smart glasses
After years of silence, Google is back with a new take on smart glasses, this time with a more realistic and connected approach. Its live demo at I/O 2025 combined cutting-edge technology, a collaboration with Xreal, and the power of its Gemini assistant. The result was a practical experience with highs and lows… and an unexpected setback that revealed both the progress and the challenges of the project.
### An Unexpected Comeback with a Familiar Name
Project Aura is back on the scene, this time not as a failed experiment but as a more connected and focused mixed reality platform. The new glasses, developed in collaboration with Xreal, run on Android XR and integrate the Gemini assistant, letting users interact with their surroundings using only their voice.
During the presentation, Google made it clear that the goal is a natural, real-time experience that doesn't require taking out the phone or using your hands. The glasses detect the environment, project an interface in real time, and respond to spoken commands, making them an "invisible extension" of the phone with far more contextual autonomy.
The highlight of the event was a live demonstration, with no editing or tricks, which is unusual for this kind of launch. Shahram Izadi, who heads the devices area, invited the audience to see the prototype in action in the hands of Nishtha Bhatia, who appeared remotely to operate it.
The glasses made it possible to read messages, play music, get directions, and even ask about a painting with a single question, all in real time. However, there was a slight delay in Gemini's responses, probably due to connection issues, and in the end a technical glitch forced the live Hindi-to-Farsi translation test to be cut short.
### Will the third time be the charm for Google?
After Google Glass and other failed attempts, the company seems to have learned that the future of smart glasses lies not in spectacle but in usefulness. This time it is not promising an abstract revolution, but small actions that simplify everyday life: reading, translating, searching, interacting… without screens or gestures, just by looking and speaking.
The project is still in development, but the direction is clear. If Google manages to overcome the technical glitches and refine the experience, it could end up with not just a novel tool, but a new standard for everyday augmented reality.
