For the last six years, the team at Georgia Tech’s Music Technology Center has been developing Shimon, a remarkable musical robot that can improvise melodic accompaniment. Three years ago, they added Shimi to the mix: a small, smartphone-connected robot that responds to music with dance and sound.
Unlike other robots that can play music, Shimon is perceptual. It can listen to what is played, analyze it, and then improvise. And it has been taught to improvise like some jazz masters, particularly the great jazz pianist Thelonious Monk.
In a recent interview with NPR’s Robert Siegel, the Center’s founding director, Gil Weinberg, said that the result is music meant to inspire people. It’s not an effort to turn music-making over to robots. Weinberg told Siegel, “The whole idea is to use computer algorithms to create music in ways that humans will never create … Our motto is, ‘Listen like a human, but improvise like a machine.’ ”
Weinberg and his team of students wanted to program Shimon to play piano like Thelonious Monk, but they first had to teach it how a human plays. To do that, they statistically analyzed transcriptions of Monk’s improvisations. Once they had a statistical model, they could program the robot to improvise in his style. According to Weinberg, some musicians are harder to model than others: Ornette Coleman would require a much larger body of transcribed work than Monk did.
Weinberg admits that the robot won’t play and improvise exactly like the jazz pianist, or any other jazz master, but it will likely preserve the nature and character of the musician’s style. While it would be difficult to predict exactly what a musician would improvise at any moment in a piece, the team’s algorithm looks at the last group of notes played and estimates the probability of the next note or notes, based on the team’s analysis of the full body of transcribed improvisations. It essentially reduces music to numbers and statistics.
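The approach described above, predicting the next note from the probabilities observed after the last few notes, is the core idea behind a Markov-chain model. The sketch below is a minimal illustration of that general technique, not the team’s actual system: the note names, corpus, and function names are all hypothetical stand-ins for real transcribed solos.

```python
import random
from collections import defaultdict

def build_model(notes, order=2):
    """Count how often each note follows each length-`order` context."""
    model = defaultdict(lambda: defaultdict(int))
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])
        model[context][notes[i + order]] += 1
    return model

def improvise(model, seed, length=8, rng=None):
    """Extend `seed` by sampling each next note from the learned counts."""
    rng = rng or random.Random()
    out = list(seed)
    order = len(seed)
    for _ in range(length):
        context = tuple(out[-order:])
        choices = model.get(context)
        if not choices:  # unseen context: fall back to a random known one
            context = rng.choice(list(model))
            choices = model[context]
        notes, weights = zip(*choices.items())
        out.append(rng.choices(notes, weights=weights)[0])
    return out

# Toy "transcription": a repeating motif stands in for analyzed solos.
corpus = ["C", "E", "G", "A", "G", "E", "C", "E", "G", "A", "G", "E"]
model = build_model(corpus, order=2)
print(improvise(model, seed=["C", "E"], length=8))
```

A higher `order` makes the output hew more closely to the source material, while a lower one produces looser, more surprising lines, which is one knob a system like this can turn when trading fidelity to a player’s style against novelty.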
At Georgia Tech’s website, Shimon is described as “an improvising robotic marimba player that is designed to create meaningful and inspiring musical interactions with humans, leading to novel musical experiences and outcomes. The robot combines computational modeling of music perception, interaction, and improvisation, with the capacity to produce melodic acoustic responses in physical and visual manners. Real-time collaboration between human and computer-based players can capitalize on the combination of their unique strengths to produce new and compelling music. The project, therefore, aims to combine human creativity, emotion, and aesthetic judgment with algorithmic computational capability of computers, allowing human and artificial players to cooperate and build off each other’s ideas. Unlike computer- and speaker-based interactive music systems, an embodied anthropomorphic robot can create familiar, acoustically rich, and visual interactions with humans. The generated sound is acoustically rich due to the complexities of real life systems, whereas in computer-generated audio acoustic nuances require intricate design and are ultimately limited by the fidelity and orientation of speakers. Moreover, unlike speaker-based systems, the visual connection between sound and motion can allow humans to anticipate, coordinate and synchronize their gestures with the robot.”
It continues: “In order to create intuitive as well as inspiring social collaboration with humans, Shimon analyzes music based on computational models of human perception and generates algorithmic responses that are unlikely to be played by humans. When collaborating with human players, Shimon can therefore facilitate a musical experience that is not possible by any other means, inspiring players to interact with it in novel expressive manners, which leads to novel musical outcomes.”
Shimon has performed with human musicians in dozens of concerts and festivals, from DLD in Munich, Germany, and the U.S. Science Festival in Washington, DC, to the Bumbershoot Festival in Seattle and Google I/O in San Francisco. It has also performed over video link for attendees at conferences such as SIGGRAPH Asia in Tokyo and the Supercomputing Conference in New Orleans.