Google Photos proves you can add AI with care and compromise


AI has taken over everything in tech. Whether it's search, social media, or essentially every other app on your phone, AI is simply everywhere. It demands your attention and is often not optional, but Google Photos has just delivered an increasingly rare example of building AI features with a sense of compromise and care for the end user.


This edition of 9to5Google Weekender is a part of 9to5Google's rebooted newsletter that highlights the biggest Google stories with added commentary and other tidbits. Sign up here to have it delivered to your inbox early!


While Google Photos has technically always involved AI, it did so before "AI" became a buzzword. Machine learning is what allowed Photos to recognize the faces in your images, search for objects and scenes in a photo or video, and more. The app we've all come to love has been about AI from day one.

But the recent infusion of Gemini AI into Google Photos is what felt like almost too much.


"Ask Photos" debuted last year as an AI upgrade to search, a tall order given that search was already good. Google's promise was natural-language search that could dig deeper than the existing solution, answering questions by drawing on your photo library. A good idea, but one that didn't really work at first. Ask Photos was painfully slow and cumbersome to use, to the point that a "classic search" shortcut was needed just to get things done in many cases. Again, the idea was good, but the execution left a lot of room for improvement.

In many cases, a feature rollout like this would simply have been left alone. It's AI, after all, so surely people want it, right? That attitude, unfortunately, seems to be the prevailing one today.

But Google didn’t do that.

Instead, Google quietly paused the rollout, directly admitting that Ask Photos "isn't where it needs to be." The company has since introduced a new version of the feature that addresses complaints such as speed, while striking a stellar compromise on functionality.

The updated version of Ask Photos meets in the middle between going "full AI chatbot" and the experience many have grown to love. When you search, Google starts surfacing traditional results almost immediately. Then, once those initial results appear, you'll see a "thinking" status indicating that AI is working in the background to expand on the search. Once it's finished, you'll get answers to whatever you may have asked. So, if I search for "what I had for dinner last Saturday," Photos just shows me those images immediately, then gives me an AI breakdown of the meal, with the option to follow up on that message for more information.

That alone is a great compromise. It delivers AI features with the speed of normal search. Great!

What makes it even better is that Photos recognizes when AI isn't needed. If you search for something ordinary, like "mountains," it just finds relevant images, then uses Gemini to try (without success in my testing so far) to highlight the "best match." No chatbot nonsense, just the photos you wanted to find.

I think this experience is nothing short of a masterclass in launching new AI features, but with real care for the user experience and a willingness to compromise on it all. Google leaves it all on by default, yet it genuinely feels like a net positive for the experience rather than AI nonsense.

Maybe best of all, I can turn it all off if I want to. Thanks to an updated settings menu, Ask Photos can be shut off entirely and, when it is, I simply get the classic search experience. It's sad how refreshing that is, because it should be the default on any product that has rushed to add AI features.

All of this feels like the polar opposite of the strategy many others have taken. Meta, for example, mostly shoves AI in front of all of its users, with invasive Meta AI integrations on Facebook and Instagram, and the app formerly used only to manage its popular Ray-Ban smart glasses has now been transformed to be almost entirely about AI. There's no care for the user; it's all about pushing AI forward. Google has also been guilty of being aggressive with AI – AI Overviews not being optional comes to mind – but the company has really shown with Photos that there's a better way.

What do you think? Did Google strike a good compromise? Let's talk!


This week's top stories

Samsung's next Galaxy launch is July 9

Earlier this week, Samsung officially confirmed the rumors. The next Galaxy Unpacked event takes place on July 9 with the Galaxy Z Fold 7 as the headliner, and leaks this week revealed a lot of what's in store:

Samsung now has reservations open for the Galaxy Z Fold 7 and Flip 7, with a $50 credit when you pre-order, up to $1,150 in savings, and more. Reservations are free and there's no obligation to buy a device if you sign up, but there's no other time to get that $50 credit (which can go toward a Galaxy Watch 8), 3x reward points, and other perks.

Nothing Phone (3) is making a lot of noise

Meanwhile, the Nothing Phone (3) is also making noise ahead of its launch next week on July 1. Leaks revealed its… interesting design, and Nothing has also confirmed some key details.

More top stories

https://www.youtube.com/watch?v=US7G7HB7FN8


The rest of 9TO5

9to5Mac: Here's everything coming to the Music app in iOS 26

9to5Toys: Another Switch 2 console restock officially announced at Best Buy for next week, here are the details

Electrek: Solid-state EV batteries with 1,800+ miles of range? It seems too good to be true


Follow Ben: Twitter/X, Threads, Bluesky, and Instagram

FTC: We use income earning auto affiliate links. More.


