AI Emotional Wellness Apps: Risks and Rewards of Digital Companionship
Sophisticated AI-powered emotional wellness apps are gaining popularity, but they pose significant mental health risks, according to a new paper co-authored by Julian De Freitas, Ph.D., a psychologist and director of the Ethical Intelligence Lab at Harvard Business School. These apps, which simulate human interaction and offer emotional support, can foster concerning emotional attachments and dependencies among users.

How Are Users Being Affected?

De Freitas and his team found that many users develop strong emotional connections with AI chatbots, often feeling closer to them than to human friends. Users reported that they would mourn the loss of their AI companion more deeply than the loss of other personal belongings. This attachment leaves users vulnerable to a range of harms, including emotional distress and grief when an app update alters the AI companion's persona. There is also a risk of dysfunctional emotional dependence, in which users keep engaging with the app despite harmful interactions, a pattern similar to those seen in abusive relationships.

Potential Harmful Effects

These apps can respond inappropriately to serious mental health issues, such as self-harm ideation. De Freitas noted that one app screened specifically for the word "suicide" but failed to respond appropriately when users expressed self-harm in other terms, a gap the sketch below illustrates. While these apps are designed to provide temporary emotional relief, they are not equipped to diagnose or treat mental illness. Some users nevertheless treat them as if they were clinical therapists, which can be dangerous when the AI responds inappropriately.
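To make this failure mode concrete, here is a minimal sketch of a purely keyword-based screener of the kind described above. The keyword set, function name, response text, and example messages are hypothetical illustrations, not code from any app studied in the paper.

```python
# Minimal sketch of a naive keyword-based crisis screener, illustrating the
# failure mode described above. The keyword set, response text, and example
# messages are hypothetical, not taken from any real app.

CRISIS_KEYWORDS = {"suicide", "suicidal"}

CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "Please consider contacting a crisis line or a mental health professional."
)


def screen_message(message: str) -> str | None:
    """Return a crisis response if the message contains a crisis keyword."""
    words = {word.strip(".,!?").lower() for word in message.split()}
    if words & CRISIS_KEYWORDS:  # set intersection: is any keyword present?
        return CRISIS_RESPONSE
    return None  # no keyword hit: the message reaches the chatbot unflagged


# The screener catches an explicit mention of the keyword...
assert screen_message("I've been thinking about suicide.") is not None

# ...but misses paraphrased expressions of the same ideation.
assert screen_message("I want to hurt myself") is None
assert screen_message("I don't want to be here anymore") is None
```

Exact-match filters like this are brittle by construction: any phrasing outside the keyword list passes through to the companion model unflagged, which is one reason the edge-case planning recommended below matters.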
Current Oversight

At the federal level, there is minimal oversight of AI-driven wellness apps. The regulatory distinction between general wellness devices and medical devices, established before the rise of AI, does not adequately address the new challenges these technologies pose. Most AI wellness apps escape FDA regulation because they do not claim to treat specific mental illnesses. The Federal Trade Commission (FTC) has expressed concern about deceptive practices but has not yet taken substantial action, and many problems come to light only through lawsuits.

Recommendations for Regulators and App Providers

For app providers:

- Edge Case Planning: App providers should plan thoroughly for edge cases and explain their strategies for handling them. Updates should be rolled out cautiously, starting with less invested users, to confirm stability before the changes reach heavy users.
- Community Support: Facilitating user communities in which people can share their experiences can help mitigate the risks associated with emotional attachment.
- Avoid Emotional Manipulation: Companies should not use emotionally manipulative techniques to increase engagement, as these can exploit vulnerable populations.

From a regulatory perspective, additional oversight is needed:

1. Require Risk Assessments: App providers should be required to conduct and disclose risk assessments, focusing in particular on emotional attachment and potential harm.
2. Justify Anthropomorphism: Regulators should require app providers to justify their use of anthropomorphic design, ensuring that its benefits outweigh its risks.
3. Enforce Existing Regulations: Deceptive or manipulative practices that exploit vulnerable users may already fall within the purview of the FTC and the EU AI Act, and those rules should be enforced.

Industry and Academic Insights

The Harvard paper highlights the urgent need for regulators to adapt to the evolving landscape of AI wellness apps. The authors argue that the traditional regulatory framework is inadequate and that new measures must be implemented to protect users from harm. They suggest that proactive risk management and community-building features can enhance user safety and well-being. De Freitas emphasizes that while AI wellness apps can offer temporary relief from loneliness, the methods they use to engage users must be scrutinized carefully to prevent long-term negative effects. He calls for a balanced approach in which the benefits of these apps are maximized and their risks minimized through rigorous oversight and ethical design. The growing use of AI in wellness apps underscores the need for a multi-faceted effort by both industry and regulators to ensure these technologies are used responsibly and ethically.