Study Ranks Le Chat, ChatGPT, and Grok as Most Privacy-Friendly Generative AI Services
A new report from data removal service Incogni examines the data privacy practices of nine popular generative AI services, ranking them on 11 criteria. The study, titled "Gen AI and LLM Data Privacy Ranking 2025," aims to identify the best and worst performers in terms of user data privacy and transparency. The services evaluated were Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each platform was assessed on factors such as the type of data used for training, the clarity of how user data is used, the ability to remove personal information, and the readability of its privacy policy.

Top Performers:

1. Le Chat by Mistral AI emerged as the most privacy-friendly AI service. It excelled in transparency and gave users clear ways to control how their data is used, though it lost some points for minor transparency issues.

2. ChatGPT by OpenAI ranked second, praised for its clear, understandable privacy policy and for letting users opt out of having personal data used to train models. However, there were concerns about how OpenAI trains its models and manages user data interactions.

3. Grok by xAI came in third, performing well in most areas but lagging slightly in the readability of its privacy policy.

4. Claude by Anthropic and Pi by Inflection AI followed in fourth and fifth place; both had strong overall privacy practices but faced specific challenges in certain areas.

Bottom Performers:

1. DeepSeek ranked sixth, with notable privacy issues.

2. Copilot by Microsoft ranked seventh, criticized for its data collection and sharing practices, particularly with third-party advertisers.

3. Gemini by Google took eighth place, performing poorly on letting users opt out of having their data used for training.

4. Meta AI was rated the least privacy-friendly, scoring lowest on overall data collection and sharing practices.
The report highlighted significant differences in privacy practices across platforms, especially regarding the use of user data for model training. Some services, including ChatGPT, Copilot, Le Chat, and Grok, allow users to opt out of having their prompts used for training. In contrast, platforms like Gemini, DeepSeek, Pi, and Meta AI do not offer this option, according to their privacy policies.

Incogni also noted that many AI companies share user data with various third parties, including service providers, law enforcement, research partners, and affiliated companies. For instance, Microsoft's privacy policy suggests that user prompts may be shared with third parties for online advertising. Similarly, DeepSeek's and Meta AI's policies indicate that prompts can be shared with companies within their corporate groups.

Transparency and the readability of privacy policies were key factors in the evaluation. A well-written, easily accessible support section that addresses user privacy questions was found to significantly enhance transparency. However, large tech companies such as Microsoft, Meta, and Google often use extensive, all-encompassing privacy policies that cover multiple products, making it difficult for users to find specific information about how data is handled in their generative AI services.

Industry Insider Evaluation:

The findings of Incogni's report underscore the urgent need for stronger data privacy practices in the rapidly evolving field of generative AI. With growing public concern over data misuse and privacy violations, AI companies must prioritize transparency and user control to maintain trust and compliance. Mistral AI's leading position demonstrates that it is possible to build robust AI services while respecting user privacy. Conversely, the underperformance of Meta AI and other major tech giants highlights significant room for improvement in their privacy policies and practices.
As these companies continue to integrate AI into their products, addressing these issues will be crucial for their long-term success and ethical standing.