Apple is reportedly giving Siri a large language model (LLM) upgrade in a year or two. The boost is expected to make the iPhone’s virtual assistant more conversational and equip it with broader world knowledge. What most users don’t know is that the iOS 26 beta already includes a hidden chatbot that testers can try right away.
For reasons I’ll get into later, iOS 26 offers neither a dedicated app for Apple’s LLM chatbot, nor does it surface it in the Siri experience by default. I came across the hidden interface while exploring the updated Shortcuts app. To try it, you’ll have to build your own shortcut on the iOS 26 developer beta.
Before you start …
Before diving into Apple’s AI chatbot and its abilities, there are a few things you should keep in mind:
- When you build the chatbot via Shortcuts, you can choose between Apple’s on-device model, Private Cloud Compute, and OpenAI’s ChatGPT (a GPT-4 variant with real-time web results).
- The knowledge cutoff for the on-device and Private Cloud Compute models is October 2023, so neither has access to live web results or recently updated information.
- Apple’s models claim to understand English, Spanish, French, German, Chinese (Mandarin), Japanese, Korean, Italian, Portuguese, Russian, Arabic, Hindi, Dutch, Turkish, and Malay – but they are apparently unreliable in several of these languages.
- The chatbot will avoid discussing illegal activities, hate speech, violence, self-harm, sexual content, personally identifiable information, illegal drug use, and political extremism.
- I tested the chatbot for about a week on an iPhone 16 Pro Max running iOS 26 Developer Beta 1.
- The features I’m about to break down should be available on any iPhone, iPad, or Mac that supports Apple Intelligence and runs the respective version 26 betas.
Chatbot configuration
Like any shortcut, there isn’t a single way to build the AI chatbot. You can get creative and customize it to work the way you expect. The key action you’ll need to incorporate is the new Use Model action, found under the Apple Intelligence section of the shortcut creation flow, which accepts text input and returns text output.
When selecting the model, I recommend the on-device option. Choosing ChatGPT makes little sense, because OpenAI already offers native and web chatbots that work more reliably than a shortcut. Likewise, privacy aside, I don’t see much reason to use Apple’s Private Cloud Compute, because online services like ChatGPT and Google Gemini are miles ahead.
The main advantage of using the chatbot with Apple’s on-device model is that it works offline and requires no additional download (assuming you already use Apple Intelligence). If you’re connected to the internet, you’re better off using one of the reputable third-party online chatbots for your everyday questions.
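For developers curious about what “no additional download” means in practice, the same on-device model is exposed through the Foundation Models framework Apple previewed at WWDC25. Here is a minimal sketch of an availability check, assuming the API names from that preview (SystemLanguageModel and its availability property); the exact surface may differ in the final iOS 26 SDK.

```swift
import FoundationModels

// Minimal sketch: check whether the on-device Apple Intelligence model
// is ready before trying to prompt it. API names are based on Apple's
// WWDC25 Foundation Models preview and may change in the final SDK.
@available(iOS 26.0, *)
func onDeviceModelIsReady() -> Bool {
    switch SystemLanguageModel.default.availability {
    case .available:
        return true   // Model is present and usable offline.
    default:
        return false  // Device not eligible, Apple Intelligence disabled, or model not ready.
    }
}
```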
If you prefer a voice-driven approach, you can add an action that converts your speech to text, then feeds the resulting text to the model. You can also have the Speak Text action read the response aloud.
My ideal setup gives the shortcut a text box. Once I type my request, a dedicated action explicitly asks the LLM to keep things brief before feeding in my text, to avoid unnecessarily long responses. I’ve also enabled the Follow Up toggle on the Use Model action, since it lets me ask additional questions while keeping the context and chat history within a single session.
To reproduce my configuration, follow these steps (a rough code equivalent follows the list):
- Launch the Shortcuts app on the iOS 26 beta.
- Tap the plus (+) button in the top-right corner to create a new shortcut.
- Search for and add the Text action.
- Tap Text inside the Text action and choose Ask Each Time.
- Search for and add the Use Model action. Choose the On-Device option.
- Tap Request in the Use Model action, type Briefly process the following request: and then add the Text variable from the suggestions row just above the keyboard.
- Tap the right arrow (>) on the Use Model action and enable the Follow Up toggle.
- Exit the shortcut editor to save it.
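For those who would rather see the same idea in code, here is a rough sketch using the Foundation Models framework mentioned above. It assumes the LanguageModelSession and respond(to:) names from Apple’s WWDC25 preview, and the “briefly” instruction and multi-turn loop simply mirror the prompt prefix and Follow Up toggle from the shortcut; treat it as an illustration rather than a verified implementation.

```swift
import FoundationModels

// Rough code analogue of the shortcut: an on-device session that is told
// up front to keep answers brief, with each follow-up question reusing the
// same session so context carries over (like the Follow Up toggle).
@available(iOS 26.0, *)
func askBriefly(_ questions: [String]) async throws {
    let session = LanguageModelSession(
        instructions: "Briefly process each request. Keep answers short."
    )
    for question in questions {
        let response = try await session.respond(to: question)
        print("Q: \(question)\nA: \(response.content)\n")
    }
}
```

Reusing one session across calls is what preserves the chat history between questions, which is the closest programmatic equivalent to flipping on Follow Up.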
When your shortcut is ready, you can trigger it in several ways, including a custom voice command, a double back tap, a Spotlight search, the Action Button, and more. If you’ve enabled iCloud syncing, you can use the same shortcut across all your compatible iPhones, iPads, and Macs.
Test it
To find out how reliable the AI chatbot is, I asked it one of the questions that has most perplexed humanity: how many Rs are there in the word strawberry? Using the on-device LLM, the chatbot correctly answered three every time. Curiously, when I opted for the supposedly superior Private Cloud Compute option, it stubbornly insisted there were only two Rs. Then the real tests followed, all using the on-device model.
I asked the offline chatbot kitchen questions, such as how long to boil an egg or cook ground beef in a pressure cooker. The results were mostly accurate and informative. It can also provide ingredient lists and instructions for famous recipes, but I wouldn’t necessarily trust it if I had guests over. When asked whether pineapple belongs on pizza, it refused to give the only correct answer and insisted it was a matter of taste – presumably to avoid offending certain users. Disappointing.
Moving on, I fed the chatbot basic math equations, and it solved them correctly. It is also aware of and follows the PEMDAS rule, so you don’t need to insert parentheses for it to multiply before adding.
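To illustrate, a prompt like 2 + 3 × 4 should come back as 14 (the multiplication is evaluated first), not the 20 you would get by simply working left to right.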
When asked to compare the feature sets offered by WhatsApp and Telegram, it provided a well-formatted list breaking down the main options. However, most of the information it reported (confidently) was incorrect. On top of that, for no apparent reason, the chatbot sometimes responded in German even when my requests were explicitly written in US English.
Speaking of languages, while the chatbot claims to support Arabic and Turkish, it couldn’t hold meaningful conversations in either. It produces plenty of text, but most of the answers include irrelevant words or sentences. I don’t speak the rest of the supported languages well enough to test how well it knows them, but I suspect it is only truly competent in English.
I then moved on to religious questions, which didn’t always go well either. For example, I asked about the differences and overlap between kosher and halal foods according to Jewish and Islamic teachings, and its response was inaccurate. It is aware of these dietary laws in concept, but it can’t compare them correctly or explain their guidelines.
When asked to generate an original quote off the top of its head, it offered the following: “In the silent dance between the echoes of our past and the whispers of our potential future, we find the deep truth that every moment is both a reflection of who we have become and a canvas on which we paint the essence of our being.” Quite touching, if you ask me.
To test its reasoning capabilities, I asked when we can expect iOS 26 – noting that iOS 18 launched in 2024. Given the October 2023 knowledge cutoff, the reasonable answer would have been 2032 (eight years after 2024). Instead, it replied with: “If iOS 18 is published in 2024, we can deduce that Apple generally releases new iOS versions every year.” Curiously, it landed on the right answer – not because it can predict the future, but because its reasoning skills are poor. For what it’s worth, it also thinks iOS 27 will launch in 2025 for the same reason.
I kept testing its knowledge across a wide range of subjects. For example, it can list the symptoms of common health problems, but definitely don’t rely on it (or any AI chatbot, really) for medical advice. Surprisingly, it was also able to correctly tell me which metro line to take to get from a popular point A to point B in Istanbul – specific stations and all. On the other hand, it failed at Apple OS technical support, such as explaining how to hide a photo on iOS. Its other failures include wrongly claiming that Americans cannot obtain a visa on arrival in Lebanon and that Mexican citizens do not need a visa to legally enter the United States.
Why Apple’s AI chatbot is hidden
Apple’s LLM in the Shortcuts app isn’t the cat’s meow, but it’s not completely useless either. It apparently uses the same model that powers the Writing Tools and summarization features on iOS. If you feed it a large wall of text and ask it to paraphrase or rewrite it, it will do so reliably. But why would you, when the native Writing Tools feature offers a superior UI/UX?
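If you did want to script that kind of rewriting yourself instead of going through Writing Tools, a one-off prompt against the same on-device model might look roughly like this (again assuming the previewed Foundation Models API; the prompt wording here is illustrative, not Apple’s):

```swift
import FoundationModels

// Hypothetical sketch: ask the on-device model to paraphrase a block of
// text, similar in spirit to Writing Tools but without its UI.
@available(iOS 26.0, *)
func paraphrase(_ text: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Rewrite the following text more concisely:\n\n\(text)"
    )
    return response.content
}
```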
Mainly because, as demonstrated above, Apple’s chatbot is prone to hallucinations and often gives confidently wrong answers. Sure, it answers many questions correctly, but it keeps the same confident tone when spreading misinformation. So you can’t really tell the difference unless you already know the answer, which defeats the purpose. In Apple’s defense, every response notes that you should check it for errors.
All of this will likely change when iOS 26 officially arrives, and it will certainly evolve along with Apple Intelligence’s Siri capabilities. Apple gave no indication during WWDC that it plans a dedicated chatbot as part of iOS 26’s Apple Intelligence feature set, so it probably won’t reach the level of ChatGPT or Gemini for now. But you can try it if you wish, and this hidden shortcut is as close to a demo as we’re going to get.