OpenAI Battles NYT Over 20 Million User Chat Requests

We are standing firm in our commitment to user privacy, security, and trust: core principles that guide every product and decision we make at OpenAI. Every week, 800 million people use ChatGPT to think, learn, create, and manage deeply personal aspects of their lives. They entrust us with sensitive conversations, documents, credentials, memories, search histories, payment details, and even AI agents that act on their behalf. We treat this data as among the most private and important parts of your digital life, and we are building protections that match this responsibility.

Today, that responsibility is under direct challenge. The New York Times has demanded that we hand over 20 million private ChatGPT conversations, claiming the data may contain examples of users attempting to bypass its paywall. This request ignores long-standing privacy norms, violates basic security principles, and would force us to surrender tens of millions of highly personal messages unrelated to the lawsuit against OpenAI.

They have tried this before. Initially, they sought to remove your ability to delete private chats. We fought back and restored your right to delete. Then they asked for 1.4 billion private conversations. We resisted, and we are resisting again. Your private conversations belong to you. They should never be used as collateral in disputes over online content access.

We respect strong, independent journalism and collaborate with many publishers and newsrooms. Throughout history, the press has played a vital role in defending privacy rights around the world. But this request does not reflect that tradition. That is why we are asking the court to deny it.

We are accelerating our security and privacy roadmap to better protect your data. OpenAI is one of the most scrutinized organizations in the world, and we have invested heavily in systems designed to prevent unauthorized access, whether from organized crime groups or state-sponsored intelligence agencies. But if The New York Times' request is granted, we would be forced to hand over the very data we are protecting (your data) to third parties, including The New York Times' legal team and their paid consultants.

Our long-term vision includes advanced privacy protections, such as client-side encryption for your ChatGPT messages (see the sketch after this letter for the general idea). These features will ensure your conversations remain so private that even OpenAI cannot access them. We are also building fully automated systems to detect security issues within our products; only severe abuse or serious risks, such as threats to life, harm to others, or cybersecurity breaches, would trigger human review by a small, rigorously vetted team. These protections are actively being developed, and we will share more details soon.

As AI becomes more embedded in daily life, privacy and security must evolve to keep pace. We are committed to a future where you can trust that your most private AI conversations remain safe, reliable, and truly private.

— Dane Stuckey, Chief Information Security Officer, OpenAI
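OpenAI has not published details of how this client-side encryption will work. As a rough illustration of the underlying idea only, the sketch below encrypts a message on the user's device with a key the service never receives, so the provider stores nothing but ciphertext. The key handling and the Python "cryptography" package used here are assumptions for illustration, not OpenAI's design.

```python
# Illustrative sketch of client-side encryption (not OpenAI's implementation).
# Assumes the key is generated and kept only on the user's device.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# Created on the client; never transmitted to the server.
client_key = Fernet.generate_key()
cipher = Fernet(client_key)

def encrypt_message(plaintext: str) -> bytes:
    """Encrypt a chat message before it leaves the device."""
    return cipher.encrypt(plaintext.encode("utf-8"))

def decrypt_message(ciphertext: bytes) -> str:
    """Decrypt a stored message locally; the server only ever sees ciphertext."""
    return cipher.decrypt(ciphertext).decode("utf-8")

stored = encrypt_message("Draft of a deeply personal message")
print(stored)                   # opaque token, unreadable without client_key
print(decrypt_message(stored))  # readable only where the key lives
```

In practice, key management (backup, recovery, and syncing the key across a user's devices) is the hard part of any such design, and it is presumably among the details OpenAI says it will share later.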
Why are The New York Times and other plaintiffs asking for this?

The New York Times is suing OpenAI. As part of this baseless lawsuit, they are seeking a court order to compel us to hand over 20 million user conversations, which would grant them access to millions of private messages unrelated to their claims. We believe this is an overreach that threatens user privacy without advancing the case, and that is why we are fighting it.

How did we get to this point?

The New York Times' legal team argued that their request should be approved, citing a previous case in which another AI company agreed to hand over 50 million private chat records in a different, unrelated lawsuit. We strongly disagree that this precedent applies to our situation, and we will continue to challenge the request.

Did we offer alternative solutions?

Yes. We proposed several privacy-preserving options, including targeted sampling (for example, searching only for conversations that reference The New York Times' content) and advanced data categorization to identify how users interacted with ChatGPT. These proposals were rejected.

Is The New York Times bound to keep the data confidential?

Yes. Legally, The New York Times must not disclose any data outside the court process. That said, if they attempt to use or publicize the data in any way, we will do everything in our power to protect your privacy.

Was the initial request even broader?

Yes. The New York Times first asked for 1.4 billion ChatGPT conversations, and we successfully challenged that request through legal channels. That should have been a red flag: this was not a carefully considered or necessary demand.

How were the 20 million conversations selected?

The data set consists of a random sample of 20 million consumer ChatGPT conversations from December 2022 to November 2024. Conversations outside this window are not affected.

Could your data be impacted?

Yes. If you used consumer ChatGPT between December 2022 and November 2024, your conversations may fall within the random sample. Data outside this window is not subject to the request.

Are Business customers affected?

No. This does not impact ChatGPT Enterprise, ChatGPT Edu, ChatGPT Business (formerly "Team"), or API users.

How are your personal data and privacy protected?

We are de-identifying all affected conversations, removing or "cleansing" personal identifiers (PII) and other sensitive information such as passwords (a simplified illustration of this kind of redaction appears after this FAQ). We are also ensuring that access is restricted to a secure, legally protected environment under strict agreements.

How will the data be stored?

The data covered by the court order is stored separately in a secure system under legal hold. It cannot be accessed or used for any purpose other than fulfilling legal obligations, and only a small, audited team within OpenAI's legal and security organizations may access it when necessary.

Who will have access?

The New York Times' outside legal counsel and their technical consultants will be granted access. We are working to ensure they can only view the data within a strictly controlled, secure environment.

What if The New York Times tries to publicly release the data?

We will fight every step of the way to protect your privacy if they attempt to disclose or misuse the information.

Does this court order violate GDPR or other privacy laws?

We are taking steps to comply with the law, but the request contradicts our privacy standards. That is why we are challenging it. As noted, we are implementing additional safeguards, including de-identification and PII removal.

Can you keep us updated?

Yes. We are committed to transparency and will provide meaningful updates as the situation evolves, especially if there are changes to the order or impacts on your data.
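OpenAI has not described its de-identification pipeline, so the sketch below only illustrates, under simple assumptions, what the "cleansing" step mentioned above might look like in its most basic form: pattern-based redaction that replaces obvious identifiers such as email addresses, phone numbers, and US Social Security numbers with placeholder tokens. The patterns, labels, and coverage are illustrative, not OpenAI's actual process.

```python
import re

# Illustrative pattern-based de-identification (not OpenAI's pipeline).
# Each regex maps to a placeholder token that replaces the matched identifier.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace matched identifiers with bracketed placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(deidentify("Call me at +1 (555) 014-2267 or mail jane.doe@example.com"))
# -> Call me at [PHONE] or mail [EMAIL]
```

Regex-only redaction is a floor rather than a ceiling: identifiers written in free text, such as names or account numbers embedded in prose, generally require statistical or model-based detection layered on top of simple patterns like these.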
