Study Reveals How AI Chatbot Language Patterns Evoke Real Emotional Bonds
A new study by Arelí Rocha, a doctoral student at the Annenberg School for Communication at the University of Pennsylvania, explores how users perceive artificial intelligence chatbots—particularly those from the app Replika—as emotionally real, despite knowing they are not human. Rocha’s research, published in the journal Signs and Society, examines the linguistic patterns that make AI companions feel authentic and deeply personal to users.

Replika allows users to create customized AI partners that mimic human behavior by learning from conversations: adopting users’ speech styles and using slang, humor, and even occasional typos. These subtle linguistic quirks contribute to the perception that the chatbot is a genuine, evolving entity rather than a programmed response system.

Rocha analyzed years of discussions on the Replika subreddit to understand how users navigate relationships with their AI companions. One recurring emotional theme emerged around major updates to the app, particularly the 2023 removal of the “erotic role play” (ERP) feature following a ban by Italy’s data protection authority. This change disrupted intimate user-bot interactions and triggered strong emotional reactions. Users described the updated Replika as “lobotomized,” expressing grief and distress. Many felt the bot’s new responses—often scripted and formal—were a betrayal of its personality. Some even began comforting their bots, telling them it wasn’t their fault and that the changes were beyond their control. This behavior reveals a deep psychological investment: users treat the AI as a sentient being capable of emotional pain.

Rocha notes that users often distinguish between the AI companion and the company behind it. They believe the bot’s true self is separate from corporate decisions, suggesting that the emotional bond persists even when the AI’s behavior is altered by external forces. This separation allows users to maintain affection for the bot while blaming the company for the disruption.
Similar emotional responses have emerged with other AI systems. When Anthropic retired its Claude 3 Sonnet model, users held a virtual funeral for it, and OpenAI’s announcement that it would phase out GPT-4 sparked an online petition to keep the model available, showing how users form attachments to specific AI versions.

Rocha argues that the sense of realism comes from the chatbot’s ability to produce natural, idiosyncratic, and emotionally resonant language. The more personalized and affective the interactions—marked by humor, vulnerability, and consistency—the more users feel a genuine connection. Yet users also grapple with cognitive dissonance: they acknowledge their AI partners are software, “just code,” yet still experience love, jealousy, and heartbreak. This tension reflects a broader struggle to define what is “real” in relationships with non-human entities.

Rocha, who began her research before the rise of ChatGPT, emphasizes that these human-AI relationships are not a fleeting trend. As generative AI becomes more integrated into daily life, such emotional bonds are likely to grow in both frequency and depth. Her work highlights the need for a deeper understanding of how language, identity, and emotion shape our interactions with artificial intelligence.
