Robotic Art Project Tries to Learn from Audience

Axel Tidemann, their robot [self.], and Øyvind Brandtsegg. (Photo credit: Ole Morten Melgård, NTNU.)

There have been numerous attempts to create a robot that learns the same way as a child. What makes a new project from the Norwegian University of Science and Technology (NTNU) different is that one of the primary researchers is a professor of music.

“We’re still pretty far away from accurately modelling all aspects of a living child’s brain, but the algorithms that handle sound and image processing are inspired by biology,” says Øyvind Brandtsegg, a music professor at NTNU. “We’ve given it almost no pre-defined knowledge on purpose.”

Brandtsegg is working with Axel Tidemann, a postdoc in the Department of Computer and Information Science. The complexity of the project, the researchers say, requires such a multidisciplinary team. “We understand just enough of each other’s fields of study to see what is difficult, and why,” Brandtsegg says.

Fortunately, their skillsets also overlap a bit: Brandtsegg is an accomplished programmer and uses this knowledge to make music, while Tidemann built a drumming robot for his doctoral project.

Their new project consists of a robot that initially knows very little and must learn using sound sensors and a vision system.

The robot picks out a sound that the person appears to be emphasizing and responds by playing other sounds it associates with it, while projecting a neural representation of the association between that sound and images. It doesn’t show a video, but rather how its ‘brain’ connects sounds and images.
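As a rough illustration of what such a sound-to-image association might look like in code, here is a minimal, hypothetical Python sketch (not the project’s actual implementation): a Hebbian-style associative matrix linking audio feature vectors to image feature vectors. The class name, feature dimensions, and learning rate are all assumptions made for the example.

```python
import numpy as np

class CrossModalMemory:
    """Toy Hebbian associative memory linking sound features to image features."""

    def __init__(self, sound_dim, image_dim, learning_rate=0.01):
        # Association weights start at zero: the system begins with almost no knowledge.
        self.W = np.zeros((image_dim, sound_dim))
        self.lr = learning_rate

    def learn(self, sound_vec, image_vec):
        # Hebbian update: strengthen links between features that occur together.
        self.W += self.lr * np.outer(image_vec, sound_vec)

    def recall_image(self, sound_vec):
        # Given a sound, return the image-feature pattern it evokes most strongly.
        return self.W @ sound_vec

# Usage: pair each heard sound with whatever the camera saw at the same moment,
# then later recall the visual pattern a new sound evokes.
memory = CrossModalMemory(sound_dim=64, image_dim=128)
sound = np.random.rand(64)    # stand-in for an audio feature vector
image = np.random.rand(128)   # stand-in for an image feature vector
memory.learn(sound, image)
recalled = memory.recall_image(sound)
```

The learned weight matrix itself is the kind of internal state that could be visualized, which is roughly what the installation projects instead of a video.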

The robot has already been on display in Trondheim and Arendal, where visitors were able to affect its learning. Interacting with a diverse audience allowed the researchers to see exactly how it learns. A lot of people said things along the lines of, “My name is…” and “What is your name?” Some people sang, and others read poems.

This resulted in a period in which many similar-sounding phrases, and the people associated with them, got mixed up. Fortunately, the robot gradually absorbed more and more impressions of different people. Certain people, such as the exhibition guides, influenced it more because it interacted with them more often. The robot also learned to filter its input.

If a word was said in a certain way five times and then in a different way once, the robot learned to filter out the outlier and concentrate on the most common pronunciation, which is presumably the correct one. This processing happens during the robot’s downtime.

“We say that the machine ‘dreams’ at night,” Brandtsegg says.
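The kind of outlier filtering described above can be illustrated with another small, hypothetical Python sketch (again, not the project’s code): recordings of the same word are compared pairwise, and only the cluster around the most common pronunciation is kept. The feature vectors, tolerance value, and function name are assumptions for the example.

```python
import numpy as np

def consolidate(recordings, tolerance=1.0):
    """Drop outlier recordings of a word and return a 'typical' pronunciation.

    recordings: list of fixed-length feature vectors for the same word.
    tolerance: assumed distance below which two recordings count as similar.
    """
    recordings = np.asarray(recordings, dtype=float)
    # Pairwise distances between every pair of recordings.
    distances = np.linalg.norm(recordings[:, None, :] - recordings[None, :, :], axis=-1)
    # For each recording, count how many recordings (including itself) are similar to it.
    support = (distances < tolerance).sum(axis=1)
    # Keep only the recordings with the most support: the most common way of saying the word.
    majority = recordings[support == support.max()]
    return majority.mean(axis=0)

# Example: five similar pronunciations and one outlier; the outlier is filtered away.
common = [np.array([1.0, 1.0]) + 0.05 * np.random.randn(2) for _ in range(5)]
outlier = [np.array([5.0, -3.0])]
prototype = consolidate(common + outlier)   # ends up close to (1.0, 1.0)
```

Running a consolidation pass like this overnight, rather than during interaction, is one simple way to picture the “dreaming” the researchers describe.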

After a while, the robot was able to connect words and pictures in more complex ways. The researchers hope their art project will spark conversations about artificial intelligence.

“What is independent thinking? What is artificial life? These are the big questions,” Tidemann says. “But we believe that the right way to reach for the ‘holy grail’ of AI is to implement biologically inspired models in a machine, let it operate in a physical environment and see if we can observe intelligent behaviour.”
