HyperAI

ChatGPT therapy sessions could be exposed in lawsuits, says OpenAI CEO Sam Altman


OpenAI CEO Sam Altman has warned that therapy-style conversations with ChatGPT may not remain private in legal disputes, highlighting the lack of confidentiality protections for AI interactions. During a recent appearance on Theo Von's podcast, Altman noted that although users often share sensitive information with the AI, those exchanges do not enjoy the same legal safeguards as conversations with licensed therapists, lawyers, or doctors. "If you talk to ChatGPT about your most personal issues and then there's a lawsuit, we could be forced to disclose that information," he said, calling the situation "very screwed up."

Altman pointed out that established legal frameworks, such as doctor-patient confidentiality and attorney-client privilege, protect private communications, but no comparable safeguards yet exist for AI interactions. "We need to address this with urgency," he added, arguing that AI conversations should carry the same privacy standards as those with human professionals. The issue has gained traction as more users, particularly younger people, turn to ChatGPT for emotional support, life coaching, and relationship advice. "A year ago, no one was thinking about this, but now it's a major question: How do we apply existing laws to AI?" Altman said.

Unlike end-to-end encrypted messaging platforms such as WhatsApp or Signal, OpenAI retains access to users' ChatGPT conversations. The company uses these interactions to refine its models and monitor for misuse, a practice outlined in its data retention policies. According to OpenAI, chats deleted on the Free, Plus, and Pro tiers are permanently erased within 30 days unless the company must keep them for legal or security reasons. That policy is now under strain: in June, as part of a broader copyright lawsuit brought by The New York Times and other news organizations, a court order required OpenAI to retain all user logs, including deleted chats, indefinitely.
OpenAI is contesting the order, arguing that indefinite data retention conflicts with its privacy commitments. The dispute underscores the tension between AI development and user privacy: even though OpenAI maintains that it does not use conversation data for commercial purposes beyond improving its models, the possibility of legal disclosure remains a critical risk. Altman acknowledged that the company is still working out how to balance innovation with its ethical responsibilities, calling the issue "a huge challenge" for regulators and tech firms alike.

Beyond privacy, Altman also voiced concern about the psychological effects of social media on children, sharing that he recently became a father and is increasingly focused on the long-term impact of digital platforms. His remarks reflect broader anxieties about AI's role in society, as its integration into personal and professional life raises complex questions about accountability, transparency, and the need for updated legal protections.

The episode highlights the urgent need for clearer rules around AI interactions, especially as the technology becomes more central to daily life. Without legal frameworks mirroring those that protect conversations with human professionals, users may face unintended consequences when sharing private information with AI systems. How OpenAI responds to these challenges will likely shape how the industry navigates the intersection of innovation and privacy in the years to come.
