Apple engineers baffled by subpar Siri in iOS 26.4 release


Apple’s competitiveness in the broader consumer technology arena depends on its continued efforts to integrate AI-based utility features into its bespoke voice assistant, Siri.

Yet according to some anecdotes now leaking from its labyrinthine halls, Apple’s Siri redesign faces persistent performance deficits.

Mark Gurman: “Some people testing iOS 26.4, the OS version expected to include the new Siri, are raising concerns about the voice assistant’s performance.”

Bloomberg’s Mark Gurman has another scoop today, this time focusing on Apple’s continued struggles to revamp Siri.

According to Gurman, some Apple engineers are now concerned about the performance of the new Siri, which is expected to ship with iOS version 26.4 in spring 2026.

While Gurman doesn’t cite any specific concerns about the new version of Siri, we know from Bloomberg’s August report that Apple engineers struggled to ensure Siri worked correctly in all apps and in critical scenarios such as banking.

Keep in mind that Apple has been working to introduce the following handy features in its upcoming iOS 26.4 release:

  1. In-app actions
    • Siri should be able to perform context-sensitive tasks in supported apps via voice commands. This includes adding an item to a grocery list, sending a message through the messaging app, or playing music.
  2. Personal context awareness
    • Siri should be able to leverage personal data to offer tailored services, such as browsing the Messages app to find a specific podcast mentioned in a text conversation.

Did Apple’s AI strategy precipitate the departure of a key AI executive who was developing ChatGPT-style web search for Siri?

Today’s reporting suggests a possible motive for the sudden departure of Ke Yang, who was named head of Apple’s Answers, Knowledge and Information (AKI) team only a few weeks ago and is now reportedly leaving for a lucrative position at Meta Platforms Inc.

Note that the AKI team worked to equip Apple’s bespoke voice assistant, Siri, with the ability to pull user-requested information directly from the web, much like what OpenAI’s LLMs now do routinely.

Our readers will recall that Apple blamed Siri’s V1 architecture for its insufficient performance a few months ago, with Craig Federighi, Apple’s senior vice president of software engineering, pinning the company’s AI hopes on the completely redesigned V2 architecture of the voice assistant:

“We found that the limitations of the V1 architecture did not allow us to achieve the level of quality that we knew our customers needed and expected… if we tried to deliver it in the state it was going to be in, it would not meet our customers’ expectations or Apple standards and we had to move to the V2 architecture.”

Judging from Gurman’s report today, Siri’s V2 architecture is apparently suffering its own share of teething problems.



