
China Proposes Strict Rules on AI Training Using Chat Logs, Mandating User Consent and Enhanced Privacy Protections

China is proposing new regulations that would require explicit user consent before AI companies can use chat logs to train their models. The Cyberspace Administration of China announced the draft measures on Saturday, aiming to ensure that "human-like" interactive AI services, such as chatbots and virtual companions, are safe, secure, and transparent. Under the proposed rules, AI platforms must inform users when they are interacting with an AI system and provide clear options to access or delete their conversation history. Using chat data for model training or sharing it with third parties would only be allowed with direct user consent. For minors, additional consent from a legal guardian would be required, and guardians would have the right to request deletion of a child's chat history.

The draft rules are open for public consultation, with feedback due by late January. The move reflects China's effort to balance rapid AI innovation with strong governance, emphasizing national security and public interest.

Analysts say the new rules could slow the development of conversational AI, as access to real-time user interactions is a key part of improving model performance through reinforcement learning. Lian Jye Su, chief analyst at Omdia, noted that restricting chat log use may limit the human feedback loop that has helped make AI chatbots more accurate and engaging. However, he added that China's access to large public and private datasets means the country's AI development is still well-positioned.

Wei Sun, principal analyst for AI at Counterpoint Research, said the rules are not meant to stifle innovation but to guide it responsibly. "The focus is on protecting users and preventing opaque or exploitative data practices," she said. "This is a signal to the industry to build trust and accountability into AI systems."
Sun also noted that the draft could encourage the expansion of human-like AI in socially beneficial areas, such as providing companionship for the elderly in China's rapidly aging population. The regulations may be seen as a policy push to develop AI in a way that is both scalable and aligned with societal needs.

The proposed rules come amid growing global concern over how AI companies handle private user data. In August, Business Insider reported that contract workers for companies like Meta have access to user chat logs, including highly personal conversations that resemble therapy sessions or intimate exchanges. Meta said it has strict policies in place to limit what contractors see and to protect user privacy. Similarly, a Google AI security engineer told Business Insider that users should be cautious about sharing sensitive information with chatbots, as such data could be exploited by cybercriminals or data brokers.

The Chinese draft rules reflect a broader global trend toward greater transparency and user control over personal data used in AI training.