SoundCloud Updates Terms to Permit AI Training on User-Uploaded Audio Content
SoundCloud has updated its terms of use to permit the company to train artificial intelligence (AI) on content that users upload to its platform. Tech ethicist Ed Newton-Rex first highlighted the change, noting that the new terms grant SoundCloud broad rights over user-uploaded material for AI purposes.

The updated terms, last revised on February 7, state that users explicitly agree to their content being used to “inform, train, develop, or serve as input to artificial intelligence or machine intelligence technologies or services.” This provision applies to all content not covered by separate agreements with third-party rightsholders, such as record labels. SoundCloud currently has licensing deals with both independent and major music publishers, including Universal Music and Warner Music Group.

Despite the addition of this clause, SoundCloud has not provided users with a clear opt-out mechanism. A TechCrunch investigation found no explicit opt-out option in the platform's settings menu on the web. SoundCloud did not immediately respond to a request for comment.

The shift toward AI integration is part of a broader trend among large content and social media platforms, and SoundCloud has been steadily embracing AI to expand its functionality. In the past year, the company has partnered with nearly a dozen vendors to introduce AI-powered tools for remixing, generating vocals, and creating custom samples. Announcing these collaborations in a blog post last fall, SoundCloud assured its partners that content ID solutions would be implemented to “ensure rights holders receive proper credit and compensation,” and committed to maintaining “ethical and transparent AI practices that respect creators’ rights.” Even so, SoundCloud’s move mirrors those of other major platforms, which have similarly updated their policies to facilitate AI training.
In October, X (formerly Twitter), led by Elon Musk, revised its privacy policy to allow external companies to use user posts for AI training. LinkedIn made a similar amendment in September, permitting the scraping of user data for training purposes. More recently, in December, YouTube began allowing third parties to use user clips for AI training.

These changes have drawn significant criticism from users. Many argue that AI training policies should require explicit opt-in consent rather than the current opt-out approach, and there are ongoing concerns about fair compensation and recognition for creators whose content becomes part of AI training datasets. As the use of AI in content creation and analysis continues to grow, these debates highlight the evolving challenges and ethical considerations facing the tech industry.