A model of virtuosity
In September, the MIT Media Lab hosted a unique concert featuring acclaimed keyboardist Jordan Rudess, violinist and vocalist Camilla Bäckman, and an artificial intelligence model known as the jam_bot. The event marked the jam_bot's public debut, showcasing its ability to interact in real time with human musicians and create a blend of classical and improvised music. Developed over several months by Rudess in collaboration with MIT's Center for Art, Science and Technology (CAST) and the Media Lab's Responsive Environments research group, the jam_bot represents a significant step in the integration of AI into live musical performance.

Rudess, recognized as one of the best keyboardists of all time, is known for his work with the progressive metal band Dream Theater and his solo projects. His musical style, which combines a strong classical foundation with improvisational skill and a penchant for experimentation, served as the basis for the jam_bot's development. Rudess, who began his piano studies at The Juilliard School at age 9, has also ventured into music education and software development, founding Wizdom Music.

The MIT team behind the jam_bot includes Lancelot Blanchard, a Media Lab graduate student with expertise in generative AI and classical piano, and Perry Naseck, an artist and engineer focusing on interactive media. Professor Joseph Paradiso, head of the Responsive Environments group and a longtime Rudess fan, oversaw the project. The group's goal was to create a system capable of "symbiotic virtuosity," in which the AI and human musicians perform together, learn from each performance, and generate new, performance-worthy music in real time.

To achieve this, Blanchard used a music transformer, an open-source neural network architecture developed by MIT Assistant Professor Anna Huang, to train the AI on Rudess' playing. The model predicts the most probable next notes, much as large language models predict the next word. Rudess provided extensive data, including recordings of his bass lines, chords, and melodies, which were used to fine-tune the AI. The system was designed to be highly responsive, allowing Rudess to engage in a musical dialogue with the AI, anticipating its decisions and controlling its output through various modalities.

The concert featured a kinetic sculpture, designed by Naseck, to visually represent the AI's contributions. The sculpture, composed of petal-shaped panels, changed its movements and lighting in response to the AI's musical output, enhancing the audience's experience. When Rudess took the lead, for instance, the sculpture swayed gently, and when the AI generated complex chords, the petals furled and unfurled like a blossoming flower. This visual element helped communicate the AI's role and the dynamics of the performance to the audience.

One of the most compelling moments of the concert came when Rudess and Bäckman left the stage, allowing the AI to continue on its own. The kinetic sculpture maintained the audience's engagement, intensifying the grandiose character of the AI-generated chords. This segment highlighted the AI's ability to create music independently, yet in harmony with the earlier human input.

While the residency has concluded, the collaborators see numerous opportunities for further research and development. Naseck plans to explore more direct ways for Rudess to interact with the kinetic sculpture, possibly through capacitive sensing, to capture subtle motions and postures.
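The next-note prediction described earlier can be illustrated with a small, self-contained sketch. The following Python/PyTorch toy is not the Media Lab's implementation or Huang's music transformer; the token vocabulary, model dimensions, and the TinyNoteTransformer and continue_phrase names are assumptions made purely for illustration. It shows only the general shape of the idea: a decoder-style model attends to the notes played so far and samples a probable next note, and fine-tuning such a model on a player's recordings (as was done with Rudess' bass lines, chords, and melodies) is what would make its predictions stylistically theirs.

# Minimal sketch of autoregressive next-note prediction. Illustrative only;
# vocabulary, sizes, and names are assumptions, not the jam_bot's design.
# Requires: pip install torch

import torch
import torch.nn as nn

VOCAB_SIZE = 130   # assumption: 128 MIDI pitches plus start/end tokens
EMBED_DIM = 64
CONTEXT = 32       # how many previous notes the model conditions on


class TinyNoteTransformer(nn.Module):
    """A toy decoder-only transformer that predicts the next note token."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.pos = nn.Embedding(CONTEXT, EMBED_DIM)
        layer = nn.TransformerEncoderLayer(
            d_model=EMBED_DIM, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        # tokens: (batch, seq) integer note tokens
        seq_len = tokens.size(1)
        positions = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        # Causal mask so each position only attends to earlier notes.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(tokens.device)
        x = self.blocks(x, mask=mask)
        return self.head(x)  # (batch, seq, vocab) logits over the next note


@torch.no_grad()
def continue_phrase(model, seed_notes, steps=8, temperature=1.0):
    """Sample a continuation of a note sequence, one token at a time."""
    tokens = list(seed_notes)
    for _ in range(steps):
        context = torch.tensor([tokens[-CONTEXT:]])
        logits = model(context)[0, -1] / temperature
        next_note = torch.multinomial(torch.softmax(logits, dim=-1), 1).item()
        tokens.append(next_note)
    return tokens


if __name__ == "__main__":
    model = TinyNoteTransformer()
    # Untrained weights, so this continuation is random; fine-tuning on a
    # specific player's recordings is what would shape the predictions.
    phrase = continue_phrase(model, seed_notes=[60, 64, 67, 72], steps=8)
    print("continued phrase:", phrase)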
Paradiso envisions a future where AI plugins for various musicians could be integrated into compositions, allowing users to control specific aspects of the AI's output. Rudess, keen on educational applications, believes the model could also be used for teaching, since the training data included exercises similar to those he uses with students.

Rudess' foray into AI at MIT is a natural extension of his long-standing interest in music technology. He has experimented with gesture-driven synthesizers and other innovative musical instruments, and his enthusiasm for AI is driven by a desire to push the technology toward positive uses. Despite resistance from some fellow musicians who fear AI might replace human creativity, Rudess advocates for AI as a tool to enhance musical expression and education.

The collaboration between Rudess and MIT has been mutually enriching. Rudess has visited the lab multiple times, engaging with students and faculty, reviewing projects, and sharing his expertise. During his most recent visit, he taught a masterclass for pianists in MIT's Emerson/Harris Program, which supports conservatory-level musical instruction for 67 scholars and fellows.

Rudess says his time at MIT has left him excited and inspired, noting that his musical ideas and interests have come together in a unique and innovative way. The jam_bot project not only pushes the boundaries of AI in music but also sets a precedent for how human and machine can collaborate creatively. As Paradiso puts it, the goal is to use AI to "lift us all up," enabling musicians to explore new vistas and enriching the overall musical experience. Rudess' involvement, combining virtuosity with technological innovation, places him at the forefront of this evolving field and may inspire others to embrace and explore the possibilities of AI in music.
