Google’s Android XR glasses can’t do much yet… again. At Google I/O 2025, I got to wear the new glasses and try a few key features – exactly three features – and then my time was up. These Android XR glasses are not the future, but I can certainly see the future through them, and my Ray-Ban Meta smart glasses can’t match anything I saw.
The Android XR glasses I tried had a single display, and it didn’t fill the whole lens. The glasses projected onto a small frame in front of my vision that was invisible unless it was filled with content.
To start, a small digital clock showed me the time and the local temperature, information pulled from my phone. It was small and discreet enough that I could imagine leaving it active in my periphery.
Google Gemini is very responsive on this Android XR prototype
The first feature I tried was Google Gemini, which is making its way onto every device Google touches. Gemini on the Android XR prototype glasses is already more advanced than anything you may have tried on your smartphone.
I walked up to a painting on the wall and asked Gemini to tell me about it. It described the pointillist artwork and the artist. I said I wanted to look at the art more closely and asked for suggestions on interesting aspects to consider. It gave me pointers on pointillism and the artist’s use of color.
The conversation was very natural. Google’s latest voice models for Gemini sound like a real human. The glasses also did a good job of pausing Gemini when someone else spoke to me. There was no long delay or frustration. When I asked Gemini to resume, it said “no problem” and picked up quickly.
That’s a big deal! The responsiveness of smart glasses is a metric I’d never considered before, but it matters. My Ray-Ban Meta smart glasses have an AI agent that can see through the camera, but it works very slowly. It’s slow to respond at first, then takes a long time to answer the question. Google Gemini on Android XR was much faster, which made the whole exchange feel more natural.
Google Maps on Android XR was unlike any Google Maps I’ve seen
Then I tried Google Maps on the Android XR prototype. I didn’t get a large map dominating my view. Instead, I got a simple direction panel with an arrow telling me to turn right in half a mile. The coolest part of the whole XR demo was how that panel changed when I moved my head.
If I looked straight down at the ground, I could see a circular Google map with an arrow showing me where I am and where I should go. The map moved smoothly as I turned around to get my bearings. It wasn’t a very large map – about the size of a large cookie (or biscuit, for my British friends) in my field of vision.
As I raised my head, the cookie-sized map moved upward. The Android XR glasses don’t just stick a map in front of your face. The map is an object in space, a circle that seems to stay parallel to the ground. If I look straight down, I can see the whole map. As I tilt my head up, the map rises, and I see it at an increasingly diagonal angle as it lifts higher and higher in my field of vision.
By the time I’m looking straight ahead, the map has disappeared entirely, replaced by the directions and the arrow. It’s a very natural way to get an update on my route. Instead of pulling out and unlocking my phone, I just glance down at my feet and Android XR shows me where they should point.
Showing off the colorful display with a photograph
The final demo was taking a simple photo with the camera on the Android XR glasses. After taking the photo, I got a small preview on the display in front of me. It was about 80% transparent, so I could see the details clearly without it completely blocking my view.
Unfortunately, that was all the time Google gave me with the glasses today, and the experience left me disappointed. In fact, my first thought was to wonder whether the Google Glass I owned in 2014 had exactly the same features as today’s Android XR prototype glasses. It was pretty close.
My old Google Glass could take photos and videos, but it didn’t offer a preview on its small head-mounted display. It had Google Maps with turn-by-turn directions, but it lacked the animation and head tracking that Android XR offers.
There was obviously no conversational AI like Gemini on Google Glass, and it couldn’t look at what you see and offer information or suggestions. So what makes the two similar? They both lack apps and features.
What comes first, the Android XR software or the smart glasses to run it?
Should developers code for a device that doesn’t exist? Or should Google sell smart glasses before there are developers for them? Neither. The problem with AR glasses isn’t just a chicken-and-egg question of what comes first, the software or the device. It’s that AR hardware isn’t ready to lay eggs. We have no chicken or eggs, so there’s no point in debating which comes first.
Google’s Android XR prototype glasses aren’t the chicken, but they are a beautiful bird. The glasses are incredibly light, considering the display and all the technology inside. They’re relatively stylish for now, and Google has big partners lined up in Warby Parker and Gentle Monster.
The display itself is the best smart glasses display I’ve seen by far. It isn’t huge, but it has a better field of view than the rest; it’s well positioned just off-center in your right eye’s field of vision; and the images are bright, colorful (though translucent), and free of flicker.
When I first saw the time and temperature, it was just a small bit of text, and it didn’t block my view. I could imagine keeping a small heads-up display active on my glasses all the time, just to give me a quick flash of information.
This is just the start, but it’s a very good start. Other smart glasses haven’t felt like they belonged at the starting line, let alone on retail shelves. Eventually, the display will improve and there will be more software. Or, really, any software, because the current feature set was incredibly limited.
Still, with just Gemini’s impressive new multimodal capabilities and the intuitive (and very fun) Google Maps experience on XR, I wouldn’t mind being an early adopter, as long as the price isn’t terrible.
Of course, the Ray-Ban Meta smart glasses don’t have a display, so they can’t do most of that. The Meta smart glasses have a camera, but the images end up on your phone. From there, your phone can save them to your gallery, or you can even use the smart glasses to livestream directly to Facebook. Just Facebook – it’s Meta, after all.
Given its Android provenance, I hope the Android XR smart glasses we eventually get will be much more open than Meta’s hardware. They need to be. Android XR runs apps, while Meta’s smart glasses are run by an app. Google intends Android XR to be a platform. Meta wants to gather information from the cameras and microphones you wear on your head.
I’ve had a lot of fun with the Ray-Ban Meta smart glasses, but honestly, I haven’t turned them on and used their features in months. I was already a Ray-Ban Wayfarer fan, so I wear them as my sunglasses, but I’ve never had much luck getting the voice recognition to wake up and respond to commands. I liked using them as open-ear headphones, but not when I’m in New York and the street noise drowns them out.
I can’t imagine I’ll stick with my Meta glasses once there’s a full platform with apps and extensibility – the promise of Android XR. I’m not saying I saw the future in Google’s prototype smart glasses, but I do have a much better view of what I want the future of smart glasses to look like.