Author: James H. Phelan, MD
From the September/October Issue of Robot Magazine
Vehicles: Experiments in Synthetic Psychology, by Valentino Braitenberg, The MIT Press, Cambridge, MA; available at www.amazon.com
One benefit of robotics being such a young field is that most of the gods of the robot pantheon are still alive. But what can the quarter-century-old pipe dreams of an octogenarian Italian neuroscientist working in Tübingen, Germany, tell a twenty-first-century American robotics hobbyist? A whole lot, actually! Published in 1986, this little book made an indelible early impression on my understanding of robotics, though I don't recall when or how I first learned of it. Braitenberg's ideas influenced Mark Tilden, another god of our pantheon, who embodied them in his B.E.A.M. robots (learn more at Solarbotics, www.solarbotics.com) and the WowWee Robosapien. There is hardly a better introduction to the principles of robotic control by artificial neural networks than Vehicles. You only have to look at the work of Jeff Krichmar at UC Irvine to see the continuing development of those ideas.
Although he writes about imaginary vehicles with theoretical circuits, sometimes built of mythical materials, they are really metaphors that explain, in the most basic terms, the complex workings of the human brain: the neurophysiology and even neuropsychology with which he is so familiar. We reverse-engineer his metaphors into actual vehicles, our robots. The robotics hobbyist should have no trouble seeing where these ideas are going and envisioning practical circuits to get there, especially after reading my earlier article "BEAM Robot Neurosurgery" in the November 2009 issue of this magazine. He starts with the simplest of "vehicles": a single motor powered proportionally to the input from a single sensor. If you have even a high-school biology background, you don't need his lead to envision a flagellated organism in a pond, powered by sunlight striking its chlorophyll. Add a second motor and sensor, wired either in parallel or crossed over, and you've created a light-avoiding cockroach or a flame-loving moth. Add more sensors and connections, and complex behaviors emerge.
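The two-sensor, two-motor idea is easy to try on a computer before soldering anything. Here is a minimal simulation sketch, with all geometry, gains, and names my own illustrative assumptions rather than anything from the book: each wheel runs proportionally to one light sensor, and the only difference between the "cockroach" and the "moth" is whether the sensor-to-motor wires run straight or cross over.

```python
import math

def simulate(crossed, steps=400, dt=0.05):
    """Minimal Braitenberg-style vehicle: two light sensors, two wheels.

    Each motor runs proportionally to one sensor's reading. Parallel
    wiring (crossed=False) speeds up the wheel nearer the light, so the
    vehicle steers away from it; crossed wiring steers it toward the
    light. Geometry and constants here are illustrative assumptions.
    Returns the vehicle's final distance to the light source.
    """
    lx, ly = 5.0, 0.0                    # light source position
    x, y, heading = 0.0, 3.0, 0.0        # start off to one side, facing +x
    wheelbase, offset, gain = 0.5, 0.25, 10.0

    for _ in range(steps):
        # Sensor positions, offset perpendicular to the heading.
        px, py = -math.sin(heading), math.cos(heading)
        readings = []
        for side in (+1, -1):            # left sensor, then right sensor
            sx, sy = x + side * offset * px, y + side * offset * py
            d2 = (sx - lx) ** 2 + (sy - ly) ** 2
            readings.append(gain / (1.0 + d2))   # brighter when closer
        left, right = readings
        if crossed:
            left, right = right, left    # cross the sensor->motor wires

        v = (left + right) / 2.0                 # forward speed
        omega = (right - left) / wheelbase       # turn rate (differential drive)
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        heading += omega * dt

    return math.hypot(x - lx, y - ly)

# The crossed vehicle homes in on the light; the parallel one shies away.
print("crossed:", round(simulate(True), 2), " parallel:", round(simulate(False), 2))
```

Nothing here "knows" about light-seeking; the behavior falls out of two wires and some geometry, which is exactly Braitenberg's point.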
It’s a small step to make a primitive camera, or retina, out of a lens and a simple grid of photosensors. Introduce the concept of “lateral inhibition” and you’ve got edge detection. Anyone who has worked with RoboRealm has seen this in action and where it can go from there: detection of blobs, lines, movement and direction. From there it is on to recognition of symmetry, curves and shapes.
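Lateral inhibition is simple enough to show in a few lines. In this sketch (the function and its weight are my own illustrative choices, not a circuit from the book), each photosensor's output is its own reading minus the average of its neighbors'; uniform regions cancel to zero and only the edge survives.

```python
def lateral_inhibition(sensors, weight=1.0):
    """Each cell subtracts a weighted average of its neighbors' readings.

    Over a uniform patch the inhibition cancels the excitation, so the
    output is flat; at a brightness step the cancellation fails and the
    edge stands out. Endpoints reuse their own value as the missing
    neighbor. The weight of 1.0 is an illustrative choice.
    """
    out = []
    for i, s in enumerate(sensors):
        left = sensors[i - 1] if i > 0 else s
        right = sensors[i + 1] if i < len(sensors) - 1 else s
        out.append(s - weight * (left + right) / 2.0)
    return out

# A dark-to-bright step across one row of the "retina":
row = [1, 1, 1, 1, 9, 9, 9, 9]
print(lateral_inhibition(row))   # flat regions go to zero; the edge pops out
```

Run the same trick across every row and column of a photosensor grid and you have the crude edge map that blob and line detection build on.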
Add a few layers of neural network and your robot generalizes consistently co-occurring inputs into “concepts” or “ideas.” A series of delay circuits sorts these concepts into chronological sequence, allowing cause-and-effect relationships to be recognized. A bit of memory transforms those causal relationships into predictions.
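The delay-plus-memory idea can be caricatured in a few lines of code. This toy class is my own illustrative sketch, not anything from the book: a short buffer plays the role of the delay circuit, a table of "what followed what" plays the role of memory, and prediction is just looking up the most frequent successor.

```python
from collections import Counter, deque

class SequencePredictor:
    """Toy sketch of delay circuits plus memory.

    A short deque acts as the 'delay line' that holds the previous
    concept next to the current one; a counter of (earlier, later)
    pairs acts as the 'memory' from which predictions are read off.
    All names here are illustrative.
    """
    def __init__(self, delay=1):
        self.history = deque(maxlen=delay + 1)   # the delay circuit
        self.follows = Counter()                 # the memory: (a, b) -> count

    def observe(self, concept):
        self.history.append(concept)
        if len(self.history) == self.history.maxlen:
            self.follows[(self.history[0], self.history[-1])] += 1

    def predict(self, concept):
        """Most frequently observed successor of `concept`, or None."""
        successors = {b: n for (a, b), n in self.follows.items() if a == concept}
        return max(successors, key=successors.get) if successors else None

p = SequencePredictor()
for c in ["flash", "bang", "flash", "bang", "flash", "bang"]:
    p.observe(c)
print(p.predict("flash"))   # the flash has come to predict the bang
```

A real network would do this with weights rather than a lookup table, but the principle is the same: order the percepts in time, remember which follows which, and a prediction is just a remembered sequence run forward.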
Vehicles doesn’t attempt to provide a schematic diagram for a thinking robot; it was written in the crude infancy of artificial-neural-network research. What it does provide is a napkin sketch of a roboneurology capable of supporting the robopsychology of Isaac Asimov’s I, Robot science fiction from the 1940s. We want robots to be able to learn, as we cannot possibly program them for every contingency. But if robots can observe and form their own concepts, can they not, in the wrong environment, form their own misconceptions? If they can make their own generalizations and predictions, can they not develop their own prejudices? “Paging Dr. Calvin, paging Dr. Susan Calvin….”
— James H. Phelan, MD