It's undeniable that Apple's Siri digital assistant did not exactly have a place of honor during this year's WWDC 2025 keynote. Apple mentioned it mainly to reiterate that it was taking more time than planned to bring everyone the smarter Siri it had promised a year ago, saying that the full Apple Intelligence integration would arrive "in the coming year."
Apple has since confirmed that this means 2026.
I have my theories about the reason for the delay, most of which revolve around the tension between offering a rich AI experience and Apple's core principles around privacy. The two often seem to be at cross purposes. That, however, is conjecture. Only Apple can tell us exactly what's going on, and now it has.
Mark Spoonauer, global editor-in-chief of Tom's Guide, and I sat down shortly after the keynote with Apple's senior vice president of software engineering, Craig Federighi, and Apple's vice president of worldwide marketing, Greg Joswiak, for a wide-ranging, roughly 90-minute podcast discussion about all things Apple.
We started by asking Federighi what Apple has delivered so far with Apple Intelligence, as well as the state of Siri, and what iPhone users can expect this year and next. Federighi was surprisingly forthcoming, offering a window into Apple's strategic thinking on Apple Intelligence, Siri, and AI.
What Apple has delivered
Federighi started by walking us through everything Apple has delivered with Apple Intelligence so far, and, to be fair, it's a considerable amount.
"We were very focused on creating a broad platform for really personal experiences integrated throughout the operating system," Federighi recalled, referring to the original announcement of Apple Intelligence at WWDC 2024.
At the time, Apple demonstrated Writing Tools, notification summaries, memory movies, semantic search of the photo library, and Clean Up in Photos. It delivered all of those features. But even as Apple built those tools, Federighi told us, the company recognized that "we could, on this foundation of on-device large language models, Private Cloud Compute as a basis for even more intelligence, [and] on-device semantic indexing to retrieve personal knowledge, build a better Siri."
Overpromised?
A year ago, Apple's confidence in its ability to build such a Siri led it to demo a platform that could handle more conversational context, recover when you misspeak, support Type to Siri, and present a considerably redesigned user interface. Again, Apple delivered all of it.
"We also talked about [...] things like being able to invoke a broader range of actions across your device through App Intents orchestrated by Siri, to let it do more things," Federighi added. "We also talked about being able to use personal knowledge from that semantic index, so if you ask for something like 'What's that podcast that "Joz" sent me?', we can find it, whether it was in your messages or in your email, and surface it, and then maybe even act on it using those App Intents."
The story is well known by now. Apple overpromised and underdelivered, failing to ship the vaguely promised Siri update by the end of 2024 and admitting in spring 2025 that it would not be ready any time soon. As for why it happened, that has until now been something of a mystery. Apple is not in the habit of demonstrating technology or products it does not know with certainty it can deliver on time.
Federighi, however, explained in detail where things went wrong and how Apple is moving forward from here.
"We found, as we were developing this feature, that we really had two phases, two versions of the ultimate architecture that we were going to create," he said. "The first version we had working here around the time we were approaching the conference, and at that time we had great confidence that we could deliver it. We thought by December, and if not, we figured by spring, which is why we announced it at WWDC. Because we knew the world wanted a really complete picture of 'What is Apple's thinking on AI, and where is Apple going?'"
A tale of two architectures
As Apple worked on a V1 of the new Siri architecture, it was also working on what Federighi called V2, "an end-to-end architecture that we knew was ultimately what we wanted to create, to get to the full set of capabilities that we wanted for Siri."
What everyone saw at WWDC 2024 were videos of that V1 architecture, and it was the foundation of the work that began in earnest after the WWDC 2024 reveal, in preparation for the full launch of the Apple Intelligence-powered Siri.
"We set about for months making it work better and better across App Intents, better and better at doing search," Federighi added. "But fundamentally, we found that the limitations of the V1 architecture weren't getting us to the quality level that we knew our customers and we expected. We realized that with the V1 architecture, you know, we could push and push and push and put in more time, but if we tried to push it out in the state it was going to be in, it would not meet our customers' expectations or Apple's standards, and that we had to move to the V2 architecture.
"As soon as we did that, and that was in the spring, we let the world know that we weren't going to be able to release it, and that we were going to keep working on really shifting over to the new architecture and releasing something."
We realized that [...] if we tried to push it out in the state it was going to be in, it would not meet our customers' expectations or Apple's standards, and that we had to move to the V2 architecture.
Craig Federighi, Apple
That switch, however, and what Apple learned along the way, meant Apple would not make the same mistake again and promise a new Siri for a date it could not guarantee. Instead, Apple is not "pre-announcing a date," said Federighi, "until we have, internally, the V2 architecture delivering not just in a form that we can demo for all of you..."
He then joked that while he, in fact, "could" demo a working V2 model, he wasn't going to do it. Then he added, more seriously: "We have, you know, the V2 architecture, of course, working internally, but we're not yet at the point where it's delivering at the level of quality that I think makes it a great Apple feature, and so we're not announcing a date for when that will happen."
I asked Federighi whether, by the V2 architecture, he meant a wholesale rebuild of Siri, but Federighi disabused me of that notion.
"I should say, the V2 architecture, it wasn't a start-over. The V1 architecture was, in a way, half of the V2 architecture, and now we're extending it across, making one pure architecture that spans the entire Siri experience. So we were able to build on a lot of what we'd already done, in pursuit of that end-to-end, higher-quality result."
A different AI strategy
Some may see Apple's failure to deliver the full Siri on its original schedule as a strategic stumble. But Apple's approach to AI and product is also completely different from that of OpenAI or Google Gemini. It doesn't revolve around a singular product or a powerful chatbot. Siri is not necessarily the centerpiece we all imagined.
Federighi doesn't dispute that "AI is this transformational technology [...] Everything that develops from this architecture is going to have a multi-decade impact across the industry and the economy, a bit like the Internet, a bit like mobility, and it's going to affect Apple's products and it's going to affect experiences that are well outside Apple products."
Apple clearly wants to be part of this revolution, but on its own terms and in a way that most benefits its users while protecting their privacy. Siri, however, was never the endgame, as Federighi explained.
AI is this transformational technology [...] and it will affect Apple products and it will affect experiences that are well outside Apple products.
Craig Federighi, Apple
"When we set out with Apple Intelligence, we were very clear: this was not about simply building a chatbot. So, apparently, when some of those Siri capabilities I mentioned didn't show up, people were like, 'What happened, Apple? I thought you were going to give us your chatbot.' That was never the goal, and it's not our primary objective."
So what is the goal? I think it's fairly obvious from the WWDC 2025 opening keynote. Apple intends to integrate Apple Intelligence across all of its platforms. Instead of having you head to a single app like ChatGPT for your AI needs, Apple is putting it, in a sense, everywhere. It's done, explained Federighi, "in a way that meets you where you are, not that you go off to a chat experience in order to get things done."
Apple understands the appeal of conversational chatbots. "I know many people find it a really powerful way to gather their thoughts, to brainstorm [...] So, sure, those are great things," says Federighi. "Are they the most important thing for Apple to develop? Well, time will tell where we go there, but that's not the main thing we've set out to do at the moment."
See below for the full interview with Federighi and Joswiak.
