ProducerAI Joins Google Labs, Leveraging Lyria 3 to Transform Music Creation with AI-Powered Collaboration
ProducerAI, a generative AI music tool backed by The Chainsmokers, has officially joined Google Labs. The platform lets users generate music by typing natural-language prompts such as “make a lofi beat,” drawing on Google DeepMind’s Lyria 3 model, which can transform text and even images into audio.

Google announced the integration on Tuesday, highlighting that ProducerAI offers a more interactive experience than typical AI tools. Rather than simply generating music with a single click, users can treat the AI as a collaborative partner. Elias Roman, Senior Director of Product Management at Google Labs, described the tool as enabling creative exploration: experimenting with genre fusions, crafting personalized songs for loved ones, and designing custom workout tracks.

Lyria 3, the model that powers ProducerAI, was recently introduced into the Gemini app. ProducerAI, however, provides a more immersive environment for musicians and creators to refine and curate AI-generated material. Jeff Chang, Director of Product Management at Google DeepMind, emphasized the importance of human input, describing the process as one of careful selection and refinement: “It’s not just clicking a button a hundred times and being done. It’s a thoughtful curation process.”

Notably, Grammy-winning artist Wyclef Jean used Lyria 3 and Google’s Music AI Sandbox in the creation of his recent track “Back From Abu Dhabi.” He described being able to instantly test the addition of a flute to an existing recording, showcasing the tool’s real-time creative potential. Jean stressed the unique role of human creativity: “You’re in the era where the human has to be the most creative. There’s one thing that you have over AI: a soul. And there’s one thing that AI has over you: infinite information.”

Despite these advancements, the use of AI in music remains controversial.
Many artists, including Billie Eilish, Katy Perry, and Jon Bon Jovi, signed an open letter in 2024 urging tech companies to respect human creativity and avoid using copyrighted material without consent. A group of music publishers has also sued Anthropic for $3 billion, alleging the company illegally scraped more than 20,000 copyrighted songs to train its AI, adding to a broader legal debate.

Meanwhile, some musicians have embraced AI as a tool for enhancement. Paul McCartney used AI-powered noise-reduction technology, similar to that used in video calls, to restore a decades-old John Lennon demo, resulting in the 2025 Grammy-winning Beatles track “Now and Then.” Other tools, such as Suno, have produced AI-generated songs that chart on Spotify and Billboard. Telisha Jones, a Mississippi-based creator, used Suno to turn her poetry into the viral R&B hit “How Was I Supposed To Know,” securing a reported $3 million record deal with Hallwood Media.

The legal landscape remains uncertain. A federal judge, William Alsup, ruled that training AI on copyrighted works can be lawful, though pirating that material is not. As AI continues to reshape music creation, the balance between innovation and intellectual-property rights remains a central challenge.
