Apple’s upcoming Siri upgrade may be powered by Google’s Gemini AI
Apple is reportedly moving forward with a major overhaul of its Siri voice assistant, integrating an AI-powered web search tool that could rely on technology from Google’s Gemini model, according to a new report by Bloomberg’s Mark Gurman. The update, which has been delayed until 2026, marks Apple’s latest effort to catch up in the fast-evolving AI race, where competitors like OpenAI, Google, and Perplexity have already launched advanced AI-powered search and chat capabilities.

The new Siri will feature a function internally dubbed “World Knowledge Answers,” designed to deliver AI-generated summaries of web-based information in response to user queries. The feature aims to provide richer, more contextual results than traditional search, combining text, photos, videos, and local points of interest. It will also be capable of accessing users’ personal data and interacting with on-screen content, enabling Siri to perform complex tasks based on context.

Apple’s plan involves a hybrid approach: using its own AI models to search through user data and device content, while testing Google’s Gemini model for generating web-based summaries. The two companies have reportedly reached a formal agreement allowing Apple to test the Google AI model, potentially running it on Apple’s own servers for privacy and security. This collaboration could extend beyond Siri to other iPhone features, including Safari and Spotlight search.

Spotlight, Apple’s built-in search tool, has long been positioned as a competitor to Google, offering quick answers to queries about celebrities, TV shows, and other pop culture topics. However, with the rise of AI chatbots, users now expect deeper, more dynamic responses across a broader range of subjects. Apple’s updated search function aims to close that gap by delivering comprehensive, multimodal answers powered by AI.
The revamped Siri will operate through a three-part system: a planner to interpret voice or text prompts, a search engine to gather data from the web or the user’s device, and a summarizer to condense the results into a clear, concise response. This architecture reflects Apple’s push to make Siri more proactive and intelligent, capable of handling complex requests and navigating the user’s digital environment.

Despite its late start, Apple is reportedly evaluating multiple AI models for different components of the system. While it plans to use its own AI for personal data access, it is still assessing both Google’s Gemini and Anthropic’s Claude for the planning function. The decision will likely hinge on performance, privacy, and integration with Apple’s ecosystem.

The AI-enhanced Siri is expected to launch alongside iOS 26.4, potentially as early as March 2026, well after the debut of the iPhone 17 lineup. The delay underscores Apple’s commitment to delivering a polished, competitive product rather than rushing a subpar update.

The move signals a significant shift for Apple, which has historically kept its AI ambitions in-house and avoided partnerships with rivals. By working with Google, Apple acknowledges the limitations of its current AI capabilities and is prioritizing performance and user experience over strict independence. While some critics may view the collaboration as a concession to Google’s lead in AI, it could ultimately strengthen Apple’s position by delivering a more capable, responsive, and intelligent assistant.

The success of the new Siri will depend on how well it balances innovation, privacy, and seamless integration across Apple’s devices. If executed well, the AI-powered search feature could redefine how users interact with their iPhones and set a new standard for voice assistants in the age of generative AI.
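For readers curious how the three-part planner/search/summarizer flow described above might fit together, here is a minimal Python sketch. Every name, heuristic, and data structure in it is hypothetical and invented for illustration; it is not Apple’s implementation, merely a toy model of the reported architecture.

```python
# Hypothetical sketch of a planner -> search engine -> summarizer pipeline,
# mirroring the three-part Siri architecture described in the article.
# All names and logic here are invented for illustration only.

def plan(prompt: str) -> dict:
    """Planner: interpret a voice or text prompt into a structured intent."""
    # Toy heuristic: questions go to the web, everything else stays on-device.
    source = "web" if prompt.rstrip().endswith("?") else "device"
    return {"query": prompt.strip().rstrip("?"), "source": source}

def search(intent: dict, web_index: dict, device_index: dict) -> list:
    """Search engine: gather candidate results from the web or the device."""
    index = web_index if intent["source"] == "web" else device_index
    return [doc for key, doc in index.items()
            if intent["query"].lower() in key.lower()]

def summarize(results: list) -> str:
    """Summarizer: condense the gathered results into one concise response."""
    if not results:
        return "No results found."
    return " ".join(results[:2])  # keep only the top snippets

def answer(prompt: str, web_index: dict, device_index: dict) -> str:
    """Run the full pipeline: plan, then search, then summarize."""
    return summarize(search(plan(prompt), web_index, device_index))

# Example usage with tiny mock "indexes":
web = {"weather tomorrow": "Sunny skies and mild temperatures expected."}
device = {"note groceries": "Buy milk and eggs."}
print(answer("weather tomorrow?", web, device))
print(answer("note groceries", web, device))
```

The design point the sketch captures is separation of concerns: the planner decides *where* to look, the search stage only retrieves, and the summarizer only condenses, so any single stage (say, the planner model) can be swapped out independently, which is consistent with the report that Apple is evaluating different models for different components.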