OpenAI Warns Developers of Phishing Risks After Analytics Partner Mixpanel Breach Exposes User Data
OpenAI has warned developers to remain vigilant after a security breach at Mixpanel, an analytics provider it works with, potentially exposed limited user data. The company confirmed that its own systems were not compromised and that ChatGPT users were not affected.

The incident, which occurred earlier this month, involved Mixpanel, a San Francisco-based web analytics company with more than 11,000 corporate clients. OpenAI said the stolen data may have included "limited analytics data" such as names, email addresses, and approximate locations for some users of its developer platform, particularly those who use its API services.

OpenAI emphasized that no passwords, payment information, chat history, or API request data were exposed, and that the breach was isolated to Mixpanel's infrastructure while its own internal systems remained secure.

Despite the data's relatively low sensitivity, cybersecurity experts warn it could still be used in targeted phishing campaigns. Jake Moore, global cybersecurity advisor at ESET, noted that while the information itself is not highly sensitive, it could be combined with other data to create convincing fraudulent messages. "This kind of data is often used to craft highly personalized and believable scams," Moore said.

Mixpanel confirmed the breach, detected on November 8, originated from a "smishing" attack, in which attackers use deceptive text messages to trick individuals into revealing credentials or installing malicious software. The company has since engaged law enforcement and is reaching out to all affected customers. A Mixpanel spokesperson directed reporters to a statement from CEO Jen Taylor, who confirmed the company is actively investigating and responding to the incident. The number of individuals impacted has not been disclosed.
The breach adds to a growing list of cyber threats facing OpenAI, which has become a prime target due to its rapid rise and the high value of its technology. Last year, The New York Times reported that a hacker infiltrated OpenAI's internal messaging systems and accessed sensitive details of advanced AI research. In June 2024, a former OpenAI researcher claimed he was dismissed after raising concerns about the company's security practices and the risk of foreign espionage, particularly from China. OpenAI did not respond to a request for comment sent outside regular business hours. The company continues to advise developers to treat unexpected messages with caution and to verify the authenticity of communications, especially those that appear to come from trusted sources.
