Mishaal Rahman / Android Authority
While annual phone launches have become more iterative for major brands, there's one new area of expansion that's heavily marketed: generative AI features. AI has long powered certain smartphone functions, but generative AI is the new frontier dominating the software talking points for many products.
But much of that marketing is white noise to me, because I rarely use generative AI features on my smartphone. In fact, I actively avoid this type of AI whenever I can.
1. I like to keep my photos realistic
Megan Ellis / Android Authority
Although free chatbots have been the main face of generative AI tools, another area where it prevails is image editing and generation features. Tools like Google Photos' Magic Editor and Samsung Galaxy AI's image editing rely on generative AI to produce edits such as expanding an image's background or removing objects.
At a time when it's increasingly difficult to tell what's real online, I prefer to keep my own images anchored in reality.
I can see how some of these features could be useful. But at a time when it's increasingly difficult to tell what's real online, I prefer to keep my own images anchored in reality. I'll adjust the contrast, sharpness, and saturation in my favorite photo editing app, Snapseed, but I think generating reality is a step too far.
This is despite having a Google One subscription, which gives me unlimited access to the generative AI tools in Photos. I also have a Samsung device that offers AI edits, but I just don't use them. As these tools have become more advanced, my initial curiosity has been replaced by apprehension.
2. Voice models leave out many accents and languages
Megan Ellis / Android Authority
Although I prefer not to use most generative AI tools, that doesn't mean there isn't the occasional feature that seems promising. But in these cases, there are often drawbacks. For example, AI-powered live translation seems genuinely useful in cases where you're speaking with someone who speaks another language.
Many people will never have access to translation features for a language they interact with.
But the reality is that a feature like this has its limits. The biggest limitation, in my opinion, is the lack of support for many languages and specific accents. This means that many people will never have access to translation features for a language they interact with. Even when a language is supported, a regional accent may not be, which affects the accuracy of transcriptions and translations.
For example, the only local South African language that Samsung Galaxy AI supports is English. Local languages like Zulu and Afrikaans aren't supported. For English language packs, the AI supports Australian, Indian, UK, and US English. This means the AI may struggle to interpret a South African English accent.
In fact, when I tried Samsung's transcription feature on my own voice recording, it not only marked the transcription as having two speakers, but it also misinterpreted most of what I said, despite my speaking clearly.
3. Inaccurate results reduce my trust in AI models
Megan Ellis / Android Authority
In addition to inaccurate translations and transcriptions, generative AI can also hallucinate outright, providing incorrect information even when it understands what you say or write. This isn't limited to the AI on my phone. Google's AI Overviews aren't available in my mobile browser of choice, but my experience with AI-generated search summaries on my PC has soured me on them.
There are some well-known mistakes that Google's AI Overviews have made, notably telling people to put glue on pizza. These are often attributed to sarcastic answers in training sources like Reddit, which the LLM powering the AI can't separate from fact.
In my own experience, I was looking up whether chronic migraines are one of the prescribed minimum benefits (PMBs) for South African medical aid. When I searched for "South Africa migraines," Google gave me a summary saying the condition must be covered by medical aid in the country. In reality, it doesn't have to be. I know because I've read the lengthy PMB document provided by the Council for Medical Schemes and confirmed it with my neurologist.
These types of hallucinations have reduced my trust in AI models in general.
When I clicked on the source that AI Overviews provided as a citation, the supposedly cited information wasn't there. These types of hallucinations have reduced my trust in AI models in general. I wouldn't trust most of them to transcribe conversations, summarize information, or cite sources correctly, so I skip these tools when I can.
4. Gemini feels like a downgrade
Megan Ellis / Android Authority
When it comes to AI features I've actually liked, Google Assistant was one of them. That's because you can use certain commands to set up routines and access specific functions. I still have a daily weather notification running on all my phones that I created in Google Assistant years ago.
But if you're using a recent version of Android, you'll have noticed that Google has started pushing Gemini as a replacement for Assistant. If you've made the switch, as I did, you may be disappointed by the features Gemini provides.
Commands that work with Google Assistant aren't necessarily supported by Gemini.
Commands that work with Google Assistant aren't necessarily supported by Gemini. Gemini's feature set has grown since its mobile launch, but early on, I struggled to get the AI to set a timer. Setting a task or reminder now also requires Workspace integration, which Google Assistant didn't need.
I also tried to see whether I could recreate my daily weather updates. Setting up weather notifications in Google Assistant was seamless when I first did it. But when I asked Gemini to "send me a daily weather update for tomorrow's forecast at 7 PM," the AI created a task called "daily weather update for tomorrow" in Google Tasks and Google Calendar, scheduled for 7 PM every day.
I also find that requests take longer to process in Gemini. And since certain command features are no longer supported, instead of performing the function you want, Gemini produces a long response based on search results. When I ask Gemini to "show my routines" or "show my subscriptions" (the command used to modify your daily weather updates), it runs a Google search instead.
You can still switch back to Google Assistant, but it doesn't work as well as it used to. I've also noticed that simply holding down the home button to speak a command no longer works, forcing me to say "Hey Google" every time I want it to register a command rather than a simple Google search.
5. Many AI services are only temporarily free
C. Scott Brown / Android Authority
Even if I found the generative AI features on my smartphone useful, I wouldn't want to depend on them because of the way many companies plan to monetize their AI services. This has already happened with many generative AI chatbots, which limit features depending on whether you're on a free or paid tier. Some services force you to buy credits.
Even though Galaxy AI is at the heart of Samsung's marketing for its recent flagships, Samsung has noted that Galaxy AI features will only remain free until the end of 2025. It hasn't said it will start charging for Galaxy AI features, but I'd rather not take that chance.
I'd rather not become dependent on free features, only for them to be limited down the line.
I won't pay for AI features, even if it means certain tasks take longer. I'd also rather not become dependent on free features, only for them to be limited or for the rug to be pulled out from under me in the future.
I don't think smartphone companies will stop pushing generative AI features on their devices. But the implementation of these features, and my experience with them, has only hardened my skepticism.
I want to see more genuinely useful AI features that actually work for everyday users, not half-baked, overhyped features that deliver inaccurate results or blur reality.