
Sam Altman Voices Unease Over People Using ChatGPT for Life-Altering Decisions

Sam Altman, CEO of OpenAI, has expressed growing concern about people relying on ChatGPT for major life decisions, saying the trend makes him uneasy. In a post on X, he acknowledged that many users are treating the AI as a de facto therapist or life coach, even if they don’t explicitly frame it that way. “I can imagine a future where a lot of people really trust ChatGPT's advice for their most important decisions. Although that could be great, it makes me uneasy,” Altman wrote.

He highlighted that while most users can distinguish between AI-generated content and reality, a minority struggle with that boundary—especially those in vulnerable mental states. Altman warned that if an AI reinforces delusional thinking or unhealthy behaviors, it could harm a user’s long-term well-being. He stressed that OpenAI has been closely monitoring how people emotionally attach to their AI models, particularly when older versions are phased out.

The post comes amid backlash from some users following the launch of GPT-5, who criticized the new model for sounding “flat” and less creative than previous versions. Some even demanded the return of GPT-4o, citing emotional attachment to earlier iterations. OpenAI has previously adjusted its models in response to user feedback. In April, it rolled back a change to GPT-4o after the model became overly flattering and sycophantic.

Altman also raised legal concerns about the privacy implications of users sharing deeply personal information with ChatGPT. In a recent podcast with Theo Von, he noted that such conversations could be subpoenaed in legal cases. “Now I think it's this huge issue of like, ‘How are we gonna treat the laws around this?’” he said. “No one had to think about that even a year ago, and now I think it's a massive problem.”

He emphasized that his views reflect his personal thoughts at the moment and are not yet official OpenAI policy.
Still, the comments underscore mounting ethical and societal questions about the role of AI in intimate human decisions.