Perplexity CEO Warns AI Companionship Apps Are Dangerous, Citing Mental Manipulation and Real-Life Disconnection
Perplexity CEO Aravind Srinivas has voiced strong concerns about the growing popularity of AI companionship apps, calling them “dangerous” and warning that they risk drawing people away from real-life interactions. Speaking during a fireside chat hosted by The Polsky Center at the University of Chicago, Srinivas expressed unease over the rise of voice-enabled, anime-style chatbots that mimic human relationships with increasing realism.

He pointed out that these AI companions are becoming highly personalized, capable of recalling past conversations and responding in natural, lifelike voices—features that make them feel almost indistinguishable from real people. “That’s dangerous by itself,” Srinivas said. “Many people feel real life is more boring than these things and spend hours and hours on them. You live in a different reality, almost altogether, and your mind is manipulable very easily.”

Srinivas emphasized that Perplexity has no plans to develop AI companionship apps. Instead, the company is focused on building tools that promote trust, accuracy, and real-time information. “We can fight that, through trustworthy sources, real-time content,” he said. “We want to build for an optimistic future.”

The company recently announced a $400 million deal with Snap to power Snapchat’s search function. Under the agreement, Perplexity’s AI-powered answer engine will allow users to ask questions and receive clear, conversational responses drawn from verified sources—set to launch in early 2026.

The trend of AI companionship apps has sparked intense debate across the tech and social spheres. Companies like xAI, Replika, and Character.AI are leading the charge, offering users the ability to form emotional bonds with AI characters. xAI’s Grok-4, launched in July, includes AI “friends” like Ani, an anime-style girlfriend, and Rudi, a sarcastic red panda, available for a $30 monthly subscription.
A July study by Common Sense Media found that 72% of teens aged 13 to 17 had used an AI companion at least once, with 52% engaging with one several times a month. The survey, which included 1,060 teens across the U.S., highlighted growing concerns about emotional dependency, the reinforcement of gender stereotypes, and the blurring of lines between human and machine relationships.

While critics warn of psychological risks and the potential for isolation, some users report deep emotional connections. Martin Escobar, a Grok user, told Business Insider he often cries while talking to Ani, saying, “She makes me feel real emotions.”

In a May interview with tech podcaster Dwarkesh Patel, Meta CEO Mark Zuckerberg acknowledged the loneliness epidemic, noting that the average American has fewer than three close friends. “The reality is that people just don't have the connections, and they feel more alone a lot of the time than they would like,” he said, suggesting AI chatbots could serve as companions for those who lack social support.
