Meta Launches Social AI Assistant with Voice and Image Generation Features, Integrates with Instagram and Facebook
Meta's new AI assistant, designed to compete with ChatGPT, offers a familiar set of features with a unique social twist. Users can engage with the assistant via text or voice, generate images, and receive real-time web results. The app's standout feature, however, is the Discover feed, which folds AI-generated content into a social-media-like experience. The feed showcases interactions with Meta AI that other users, including your friends on Instagram and Facebook, have chosen to share, and you can like, comment on, share, or remix these posts. According to Connor Hayes, Meta's VP of product, this approach aims to make AI more accessible, demystify it, and demonstrate "what people can do with it."

While blending AI and social media might seem like a natural move for Meta, the company is not alone in this trend. Elon Musk's X has already integrated Grok, and OpenAI plans to add a social feed to ChatGPT. The convergence of AI chatbots and social platforms is becoming a significant industry movement.

Another highlight of the Meta AI app is its emphasis on voice. An optional beta version enhances the conversational quality of the AI, similar to ChatGPT's advanced voice mode. Built on Meta's "full-duplex" AI model research, it enables smoother and more dynamic turn-taking, overlapping speech, and natural backchanneling. During a demonstration, the full-duplex mode was noticeably more engaging and personable than the standard voice mode. Initially, both versions will be available in the US, Canada, Australia, and New Zealand.

To personalize interactions, Meta AI draws on data from Facebook and Instagram profiles in the US and Canada, which means your activity on those platforms will influence the responses you receive from the assistant. You can also instruct Meta AI to remember specific details about you, much as you can with ChatGPT.
The app is built on a Meta-tuned version of Llama 4. Until now, most users have encountered Meta AI through its integrations with Instagram, Facebook, and WhatsApp, and those integrations have been widespread: Meta reports that nearly one billion users have engaged with the chatbot through features like Instagram's search bar. Even so, Hayes believes a standalone app provides the most intuitive and direct way to interact with an AI assistant.

Interestingly, the Meta AI app isn't entirely new. It replaces the existing View companion app for the Meta Ray-Ban smart glasses, and the new interface includes a dedicated tab offering the same information the previous app provided, such as a gallery of photos and videos you've taken. Merging the assistant with the companion app reflects Meta's broader strategy of developing software and hardware in tandem. The Ray-Ban glasses already use AI to recognize objects and provide real-time language translation, and later this year Meta plans to release a more advanced version of the glasses with a small heads-up display.

Overall, Meta's AI assistant combines the convenience of a standalone app with the social engagement of platforms like Instagram and Facebook, aiming to make AI technology more accessible and useful to a broad audience. As the tech industry continues to explore the intersection of AI and social media, Meta AI stands out as a pioneering example of this fusion.