EXPECT THE UNEXPECTED
THE RIDDLE OF EMERGENT BEHAVIOR
Let’s Start Talking!
The biggest obstacle to the Emergent Behavior experiment has been bot-to-bot communication. With most of the other challenges met, the remaining mountain (that I know of) is this question. Of course, without a way for one robot to talk to another, there is simply no swarm. A bunch of robots running around treating each other as obstacles does not a swarm make! That’s a crowd.
I have turned to nature for my examples before. So I wondered, how does nature do it?
I keep coming back to the example of the ants. Ants are an excellent and common example of swarms. It is also fairly easy to observe them, so much is known about their behavior.
Ants exchange very little direct information, but still manage to create complex societies. Much of this is accomplished by a technique called stigmergy. Stigmergy is a means of communicating by changing the physical environment, either by moving something, or by adding or removing something from it. This is similar to leaving a bread crumb trail or breaking branches as you walk through the woods, to show someone following you the way that you have gone.
The concept of stigmergy was first described by Pierre-Paul Grassé in 1959 to explain the indirect communication taking place among individuals in termite colonies.
Termites infuse a mudball with a pheromone which is attractive to the other termites, who then pile their pheromone infused mudballs on top of the first one and eventually a termite mound is constructed. Ants use a similar pheromone technique to identify the safe path to the food, but it is not their only form of communication. Besides putting pheromone on the ground to identify the path, they also often use their antennae to exchange a brief communication with each other when they meet.
It is this brief direct exchange (this data is also transmitted using pheromones) that tells the other ants whether it is safe outside, whether they need to repel an intruder, and other things. Then there is a reaction to that information. Since laying a pheromone path on the ground is pretty complicated (Parallax doesn’t yet have an odor dispenser/sensor set for the Boe-Bot... that I can find), it is the direct communication that I will need to simulate.
This has proven to be the single greatest challenge to the project.
If you started with me, you may recall from the first few articles that the first idea was to use IR to communicate. The only problem with that was that the range is too great. One bot at the feeding station could be talking to a bot all the way across the field. The bots are supposed to be passing food, not footballs! I would have needed a gymnasium to complete the experiment.
One helpful reader suggested using a resistor to dim the IR LED and reduce the range, but experimenting with that proved unreliable. Also, the IR is used to line up on the beams from the feeding stations, and it all just adds up to too much light pollution to use IR for both types of communication.
One method I hoped to use was to put LEDs on the robots and communicate by blinking at each other. But I couldn’t find sensors that I thought would be able to read the different colors. Also, when I tried to read the light with a phototransistor, the background light drowned out any possibility of reliably detecting the signal, except at extremely short ranges.
RF seems like an obvious choice for swarm communications and it is! Most swarms use some form of radio to ensure all the robots can hear each other and share information. But for my experiment, the problem with radio is exactly that! All the robots can clearly hear each other.
This matters for a few reasons. One is that the robots have no absolute position information, so we cannot be sure which individuals are adjacent (only relative positions are known). More importantly, the point of the communication is to simulate a physical exchange and save having to build some mechanism to actually exchange something. Although it has been difficult enough, I am starting to wonder if a physical system would have been easier.
So it was time for more research. Research is cool because you find out what other people are doing, and not only does it give you ideas, but also a chance to say “Wow, that’s a cool idea!” For example, at one point I had this thought that maybe the ultrasonic sound on the PING))) could be modulated to carry data. My research turned up an interesting find in a thread on the Parallax discussion forums.
As usual, I was not the first to consider this. It seems that at least a few others have been doing the same thing, in various ways, for a while. Basically, it comes down to covering up the receiver side on one PING))) and the transmitter side on the other. Then, when the PING))) is listening for an echo, it is actually hearing the transmitted sound from the other PING))).
While a very cool idea, it defeats the range detection function of the PING))) by converting it into either a transmitter or a receiver. Neat, but like Edison, I simply found another way that it would not work. But it gave me an idea. I realized that range detection could actually be the key.
So I started to work on range, as well as communication. Range detection is one use of IR. Range can be determined by measuring the brightness of the infrared reflection off the target surface. Obviously, this distance measurement uses a different type of pulse than communications, so the two cannot be generated by the same IR LED, and having a second one would cause interference. Not to mention that the beacon is broadcasting continuously, and so must be watched for all the time.
So IR can’t be used for range and communication at the same time, but (and here is the eureka moment) we are already measuring range. The PING))) sensor is constantly checking for obstacles in the path of the robot. It is pretty likely that, in the swarm environment, the obstacle is either another robot or the wall. Ignoring the beacon for the moment, it is a simple matter of transmitting a “ping.”
If it is a robot, it will answer. If it is a wall, then it will obviously not answer and the default action of a turn is executed.
The easiest way to envision this logic is to use a step chart. Here is a simplified step chart, showing just the steps involved with handling an encounter with an unknown obstacle and determining if it is a robot:
1. Listen for 0.5 seconds; if a “ping” is received, go to 10.
2. If nothing is received, broadcast: “Here I am.”
3. Move forward.
4. Check for obstacles; if an obstacle is found, transmit a ping.
5. If a ping is received, go to 10.
6. Check the direction of the “last turn” variable and turn the other way.
7. Change the direction of the “last turn” variable.
8. Transmit a ping.
9. Go to 1.
10. Begin communications with the robot.
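The robot-versus-wall branch of the step chart can be sketched as a small decision function. This is Python rather than the Boe-Bot’s PBASIC, and every name here is illustrative, not taken from the actual program:

```python
def handle_obstacle(ping_answered, last_turn):
    """Decide what to do after transmitting a ping at an unknown obstacle.

    Returns an (action, new_last_turn) pair. A reply means the obstacle
    is another robot; silence means a wall, so the robot turns opposite
    to its last turn and remembers the new direction.
    """
    if ping_answered:
        # Another robot answered: start the handshake.
        return ("communicate", last_turn)
    # No answer: it's a wall. Alternate the turn direction.
    new_turn = "right" if last_turn == "left" else "left"
    return ("turn_" + new_turn, new_turn)
```

For example, `handle_obstacle(False, "left")` yields a right turn and updates the “last turn” memory, while `handle_obstacle(True, "left")` hands control to the communication routine.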
This brings up the problem of two robots talking at the same time. It can be only partially solved by listening first to ensure that another bot is not already transmitting. Of course, there is still the likelihood that two bots will begin transmitting at the same time. If that happens, the receiving bots are probably getting garbage anyway, so the information will need to be repeated. I haven’t tested this yet, but when the whole swarm is out and talking, there is going to be some confusion that will require some tweaks to the code.
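One standard way to tame simultaneous transmissions is listen-before-talk with a random backoff, as in CSMA-style radio protocols. Here is a toy sketch of that idea against a simulated channel; none of this mirrors the actual PBASIC code, and the `ToyChannel` class is purely a stand-in for the shared IR medium:

```python
import random

def send_with_backoff(channel, message, max_tries=3, rng=None):
    """Listen first; if the channel is busy, wait a random number of
    slots (doubling the range each attempt) before trying again."""
    rng = rng or random.Random()
    for attempt in range(max_tries):
        if not channel.busy():
            channel.transmit(message)
            return True
        # Someone else is talking: random backoff keeps two bots
        # from retrying in lockstep and colliding forever.
        channel.wait(rng.randint(1, 2 ** (attempt + 1)))
    return False

class ToyChannel:
    """Minimal shared-medium stand-in: busy for the first few checks."""
    def __init__(self, busy_slots=0):
        self.busy_slots = busy_slots
        self.sent = []
        self.waited = 0
    def busy(self):
        if self.busy_slots > 0:
            self.busy_slots -= 1
            return True
        return False
    def transmit(self, message):
        self.sent.append(message)
    def wait(self, slots):
        self.waited += slots
```

With `ToyChannel(busy_slots=1)`, the first attempt hears traffic, backs off, and the second attempt gets through; with a channel that never clears, the send gives up after `max_tries`.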
Implementing this in the code is actually quite easy. The “ping” and the “listen for ping” code is already basically there. I wrote it as the start of the communications protocol originally developed for IR when I thought that would work by itself. It was just a matter of putting that code into subroutines because they will now be called more often and from more places. The code itself needed very little in the way of tweaks.
Physically, I need to widen the aperture on the receiver to get a broader angle. Otherwise, the transmitting robot would need to be squarely in front of the other robot in order for the IR flash (ping) to be seen.
In practice, the robots still seemed not to be seeing the “ping” from each other a lot of the time. In cases where one robot approached the other from the side, where the sensor is pointing away from the IR LED, the message would not be received at all. At this point, I got another ‘flash’ of inspiration: add side sensors!
Two more IR sensors were added per robot, one watching the left side and the other the right. This improved the response, but another problem was immediately discovered. The robots talk too much!
Whenever the robots started talking, the moment they were done, they would detect each other and the conversation would start over. I had created an infinite proximity loop. Time was critical! I needed to give the robots a little ‘down time’ after talking with another robot. It was a simple matter of adding a variable to stop trying to talk to the obstacle for 10 seconds.
This worked and now I have achieved success! The robots approach each other, exchange a handshake, stop listening and avoid each other for 10 seconds. This gives them enough time to maneuver around each other and carry on with the mission.
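The 10-second ‘down time’ amounts to a cooldown timer on the handshake. A minimal sketch in Python, with an injectable clock so the behavior can be tested without waiting in real time (the class and method names are my own; the real fix is just a timer variable in the Boe-Bot’s program loop):

```python
import time

COOLDOWN_S = 10  # matches the 10-second down time in the article

class HandshakeCooldown:
    """Suppress repeat handshakes right after a conversation."""
    def __init__(self, cooldown_s=COOLDOWN_S, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock      # injectable for testing
        self.last_talk = None   # time of the most recent handshake
    def note_handshake(self):
        """Record that a conversation just finished."""
        self.last_talk = self.clock()
    def may_talk(self):
        """True if enough quiet time has passed to talk again."""
        if self.last_talk is None:
            return True
        return self.clock() - self.last_talk >= self.cooldown_s
```

Right after `note_handshake()`, `may_talk()` returns False, so the robot treats its partner as an ordinary obstacle and steers around it; once the cooldown expires, handshaking is re-enabled.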
Now that the technical difficulties of the communication have been overcome, it is possible to concentrate on the code. Currently, the code is sloppy and piecemeal. It is the result of the many changes and adjustments that come with writing and debugging a program. I have to be cautious: except for the collision avoidance and IR handshaking, none of the behavior rules are programmed yet, and 43% of the program memory is already committed.
So now the true fun begins: getting the programming right. This is my favorite part. This is where I get to affect the world in a physical way using software I typed in at a keyboard. It doesn’t matter how small or big the movement is; when I make something move around using nothing but code, I get very excited about that.
Parallax, Inc, parallax.com