SoundCloud updates policies but denies AI training on user content.
On February 7, SoundCloud quietly updated its Terms of Service, adding a clause that permits the platform to use user-uploaded content to "inform, train, or develop" artificial intelligence (AI) technologies or services. In effect, any audio on SoundCloud may be used for AI training, except material covered by separate licensing agreements with third parties such as the major music companies Universal Music Group and Warner Music Group.

The change follows SoundCloud's push into AI-assisted creation. Last year the platform partnered with nearly a dozen vendors to introduce tools for remixing, vocal generation, and custom sample creation. SoundCloud said these features were designed to boost creators' productivity and efficiency while ensuring rights holders are properly credited and compensated, and it has repeatedly emphasized ethical, transparent AI practices. The latest terms change, however, includes no explicit opt-out for users who do not want their content used for AI training.

The move aligns with a broader industry trend of content-hosting and social media platforms revising their policies to let first- and third-party entities use platform data for AI training. X (formerly Twitter) updated its privacy policy in October to allow outside companies to train AI on user content; LinkedIn amended its rules in September to permit user data to be scraped for AI training; and YouTube announced in December that third parties may train models on user-uploaded clips, provided they obtain the necessary permissions.

These policy changes have sparked widespread debate. Many users argue for an opt-in model rather than consent by default, and worry that their creations could be used commercially without adequate authorization, infringing their rights and depriving them of recognition and economic benefit. The lack of clear communication from platforms about the updates has further fueled skepticism and discontent among creators.

From an industry perspective, SoundCloud's move reflects the prevailing trend of digital platforms embracing AI applications, yet it also exposes significant gaps in protecting user rights. Because SoundCloud is a leading independent music-sharing platform with over 180 million monthly active users and a deep catalog of emerging artists, its policy adjustments can significantly affect the music industry. Experts recommend that the company transparently define the scope and conditions of data usage, along with mechanisms for fair compensation, to foster a healthy creative ecosystem and competitive environment.

When TechCrunch reported the update, questions arose about whether SoundCloud had already begun using user-generated content for AI training. Marni Greenberg, SoundCloud's Senior Vice President and Head of Communications, said in a statement that while the platform currently uses AI for personalized recommendations and fraud detection, it has not used user content to train AI models, though it remains open to doing so in the future. Greenberg explained that the terms update is meant to clarify how content may interact with AI technologies within SoundCloud's systems, and that if specific mechanisms for selective participation are introduced, SoundCloud intends to inform users accordingly.
Despite these assurances, many users reported never receiving notification emails about the update. Ed Newton-Rex, a technology ethicist, pointed out on social media that he found no email alert about the changes, even though he actively contributes to the platform. The absence of direct communication has drawn criticism of SoundCloud's transparency: while the user agreement states that the platform will provide "prominent notice" of significant revisions, that does not amount to individual email notifications, and the oversight has raised doubts about SoundCloud's commitment to the transparency it professes.

The integration of AI into digital content platforms poses a critical challenge: balancing technological innovation with the protection of creators' rights. Although SoundCloud says it has not used user content for AI training, the recent policy change underscores the platform's shortcomings in addressing creator concerns. Transparent communication and comprehensive opt-out options are crucial steps for platforms like SoundCloud to take if creators are to feel valued and protected.

SoundCloud's complete statement reads: "Our Terms of Service update is intended to clarify how content may interact with AI technology, particularly within the SoundCloud platform. We currently use AI for personalized recommendations and fraud detection, and we expect future use cases to be similar. Should we introduce specific mechanisms for selective participation, we will communicate these options to our users."

In conclusion, SoundCloud's decision to update its service terms to include AI training provisions without an explicit opt-out highlights the ongoing tension between advancing technological capabilities and the ethical treatment of user-generated content. Industry experts urge platforms to prioritize clear communication and user choice to maintain a fair and positive relationship with creators. With its vast user base and innovative tools, SoundCloud stands at a pivotal moment: addressing these concerns could strengthen community trust and sustain innovation in the long run.