EXPECT THE UNEXPECTED
THE RIDDLE OF EMERGENT BEHAVIOR
The Build Must Go On!
One of the nice things about a big project is that if you are stuck on a problem, you can always work on something else. The communication woes covered in the last issue continue. With no clear resolution yet, I have shifted my focus, for the moment, to another part of the project: the nest.
Part of the design of the original experiment is to have two stations for the robots to travel between. One station provides “food” and disposes of “waste” and the other is the “nest”, which does the exact opposite: it accepts “food” and needs to get rid of “waste”.
This simulates the robot insects bringing food from a source to the nest and removing any by-products to keep the nest clean. These are two of the small set of rules that ants follow in the real-world example of emergence. The difference is that a real ant has only one job: it is either a harvester (collecting food) or a nest maintenance ant (removing waste). I have combined both jobs into one robot to reduce traffic and improve efficiency.
Getting the robots to navigate between the stations is a challenge, mainly because of the lack of references on the field for the robots to calculate a precise position. I am hoping to use a combination of dead reckoning and beacon following to achieve my desired result.
Here is the plan: there will be a Parallax BASIC Stamp-powered Board of Education at each end of the field, each with its own IR beacon that pulses a unique code, identifying it as either the nest or the food source.
Unlike the bot-to-bot communication, where the range of the IR LED is a problem, here it works to my advantage. In fact, it is perfect! The range is sufficient to reach all the way across the field. This should make navigation better. I should be able to start looking for the beam as soon as I turn the Boe-Bot around.
As a starting point, I will create a 20-degree spread on the beam; after some trials, this will undoubtedly change. To shape the beam, I created a simple baffle from black construction paper. I want something light and cheap so that I can experiment with it; in practice, the robots may need a wider or narrower beam, and at this stage I want to be able to adapt quickly and easily. The paper is folded to resemble a high-frequency speaker horn, except that it is oriented vertically. This creates a beam that is narrow and tall.
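As a sanity check on that 20-degree figure, simple trigonometry gives the beam width at any range. The numbers below are illustrative, not measurements from the project:

```python
import math

def beam_width(distance_cm: float, spread_deg: float = 20.0) -> float:
    """Full width of a cone-shaped beam at a given distance.

    The beam spreads spread_deg / 2 to either side of center,
    so the half-width is distance * tan(spread / 2).
    """
    half_angle = math.radians(spread_deg / 2.0)
    return 2.0 * distance_cm * math.tan(half_angle)

# At 2 meters, a 20-degree beam is roughly 70 cm wide:
print(round(beam_width(200.0), 1))  # 70.5
```

That width shrinks linearly as the robot closes in, which is one reason the spread will need tuning: wide enough to be found while weaving, narrow enough to track straight.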
The beam has to be tall so that a robot can still ‘see’ the beam, even if it is behind another robot. To maximize this effect, the beam is mounted on a tower. If you have read my article on prototyping, you know I like to work with cardboard. Again, this is to facilitate easy changes.
Until the swarm is operating properly, I know I will need to tweak things like tower height. The cardboard makes this easy. When I am satisfied with the design, I can rebuild it using a more durable material. The IR detector on the robot gets a baffle too; otherwise the detector can see the beam from the side. We don’t want the robot to have peripheral vision, or it won’t track straight in on the beam. This baffle is just a square tube and again, I use the paper that I can easily modify. It must be square to match the straight vertical edge of the beam. The intent is to get the IR “beam” as narrow as possible and still have the robots find it.
This is similar to the localizer beam used to guide airplanes in for landing when they are flying on instruments. The robot also has to be able to tell when it has come close enough to the food or the nest. Parallax has excellent ultrasonic range finding products: enter the Ping))).
The tower, as you can see in the photograph, has a little wall in front of it. Eventually, this will be mounted flush with the arena wall. As the robot tracks inbound on the IR beam, the Ping))) will measure the distance to the wall. When the robot gets within a predetermined distance, it will ‘announce’ that it has reached the ‘nest’, give the appropriate product, and return to the other side of the field to make the other exchange.
This exchange is accomplished using the same beam that broadcasts the localizer. The protocol I am working on will include a series of handshakes, exchanges, and confirmations, but for the sake of testing the beam, I am using a simple, steady pulse. It is not modulated to encode any data; the LED just emits a steady stream of IR pulses.
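For later, when the beacons do need to identify themselves, one simple scheme is to repeat a short burst of pulses: say, two pulses for the nest and three for the food source, with a longer gap between bursts. These IDs and timings are my own placeholders, not the final protocol; a decoder sketch only has to count pulses between gaps:

```python
def decode_beacon(gaps_ms, group_gap_ms=20):
    """Count IR pulses between long gaps to recover beacon IDs.

    gaps_ms: times between successive detected pulses, in ms.
    Pulses separated by less than group_gap_ms belong to one
    burst; a longer gap ends the burst. Returns burst counts.
    """
    counts, current = [], 1
    for gap in gaps_ms:
        if gap < group_gap_ms:
            current += 1            # same burst, keep counting
        else:
            counts.append(current)  # burst ended; start a new one
            current = 1
    counts.append(current)
    return counts

BEACONS = {2: "nest", 3: "food"}    # hypothetical IDs

# Three pulses close together (5 ms apart) -> burst count 3 -> food
ids = decode_beacon([5, 5])
print(BEACONS[ids[0]])  # food
```

Counting bursts rather than timing individual pulse widths keeps the decoder forgiving of the timing jitter a BASIC Stamp polling loop introduces.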
The Boe-Bot is then programmed to weave back and forth, searching for the beam. When the IR signal is detected, the robot drives straight. If it is lost, then the robot starts to weave again. Repeating this pattern should draw the robot closer and closer to the center of the nest. When the Ping))) sensor reports five centimeters, the robot stops.
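The weave-and-track behavior above boils down to a tiny state machine. Here it is as a Python simulation rather than PBASIC; the sensor values and the drive callback are stand-ins for the real hardware:

```python
def navigate(see_beam, distance_cm, drive, stop_cm=5.0):
    """One control step of the weave-and-track behavior.

    see_beam:    True if the IR detector currently sees the beacon.
    distance_cm: latest Ping))) range to the wall.
    drive:       callback taking "straight", "weave", or "stop".
    Returns True once the robot has arrived at the station.
    """
    if distance_cm <= stop_cm:
        drive("stop")          # close enough: announce arrival
        return True
    if see_beam:
        drive("straight")      # locked on: drive straight in
    else:
        drive("weave")         # lost the beam: resume weaving
    return False

# One simulated approach:
log = []
navigate(False, 100.0, log.append)  # far away, no beam -> weave
navigate(True, 100.0, log.append)   # beam found -> straight
navigate(True, 4.0, log.append)     # within 5 cm -> stop
print(log)  # ['weave', 'straight', 'stop']
```

The distance check comes first on purpose: arrival should override tracking, so the robot never drives through the stop line while locked onto the beam.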
Real Life Versus the Best Laid Plans
The first big challenge was getting the weave to work just right. I want a basically straight course: the robot turns first to the left, then through center to the right, then back through center to the left, and drives straight ahead whenever the beam is detected. The idea is that when the beam is lost, the weaving motion corrects the course.
Just getting a straight course turns out to be fairly difficult. Because of the limitations I placed on myself for simplicity, I lack some of the sophistication that a more feature-laden package would provide. As a result, quite a few frustrating hours were burned on perfecting the little things, like driving straight. This effort is causing me to consider wheel encoders and, in fact, they may prove necessary.
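The trouble with open-loop straight driving is how fast a tiny wheel-speed mismatch compounds. A quick back-of-the-envelope simulation makes the case for encoders; the wheelbase and mismatch figures are made up to illustrate the effect, not measured from my Boe-Bot:

```python
import math

def lateral_drift(distance_cm, mismatch, wheelbase_cm=10.0):
    """Approximate sideways drift after driving 'straight'.

    mismatch: fractional speed difference between the wheels
    (e.g., 0.02 means one wheel turns 2% faster). A differential
    drive with mismatched wheels travels an arc, not a line; the
    heading error grows linearly with distance traveled.
    """
    dtheta = mismatch / wheelbase_cm    # heading change per cm, radians
    theta = dtheta * distance_cm        # total heading error
    radius = distance_cm / theta        # radius of the arc driven
    return radius * (1 - math.cos(theta))

# A 2% mismatch over a 2-meter run drifts almost 40 cm sideways:
print(round(lateral_drift(200.0, 0.02), 1))  # 39.5
```

With numbers like that, the weave is not just a search pattern; it is also the only thing masking dead-reckoning error until encoders close the loop.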
Once the driving part was working OK, I began working on the detecting part. Immediately, I had to start experimenting with the baffle shape: too long, and the detector never sees the beam at all. The elevated beam was a problem for a long time. When the robot is far away, the angle is low, but as the robot approaches the tower, the beam angle increases.
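That elevation problem is plain trigonometry: the angle up to the beam grows as the robot closes in, so a baffle tuned for the far field blocks the beam in the near field. Assuming an illustrative beam height (not a measurement from the build):

```python
import math

def elevation_deg(distance_cm, beam_height_cm=15.0):
    """Angle from the robot's detector up to an elevated beam."""
    return math.degrees(math.atan2(beam_height_cm, distance_cm))

# The look-up angle steepens sharply on final approach:
for d in (200, 100, 50, 25):
    print(d, round(elevation_deg(d), 1))
```

With a 15 cm beam height, the angle climbs from about 4 degrees at 2 m to over 30 degrees at 25 cm, which is why a long, tight baffle tube that works at range goes blind right where the Ping))) is about to call the stop.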
A few more tweaks and some new baffle shapes and finally, the robot tracked well toward the target, measured the distance to the wall and stopped reasonably close to the target.
It is my favorite part of experimenting with robots. When the code and the hardware come together, the result is somewhat magical. I get a good deal of personal satisfaction when I can make a machine represent my thoughts and ideas in motion. Though they are just gears and motors, switches and wires, silicon and plastic; when they are doing what I programmed them to do, I am fulfilled.
When it is part of a large project, it is a good feeling to know I am one step closer. The next step is to build the arena and start adding logic to proceed from one side to the other and back again. It is time to begin the next challenge: the rules.
Last issue, we solicited ideas from our readership for solving the communication challenge. We received some interesting suggestions from Ken Lawler which we included in our Robot Feed column. There, you can see what Ken’s suggestions were and my initial response and thoughts.
Again, faithful emergent behavior reader, do you have any ideas to help me solve any of the challenges I am facing? If you do, feel free to send them to me at: firstname.lastname@example.org and I will try out your best ideas and report on them in the next issue. Maybe they will make good YouTube videos.