I am passionate about machine learning, intelligence, and robotics, and I have a number of robot projects on the go. I wanted to build a platform that would let me run complex experiments in sensor fusion and emergent intelligent behavior. I needed a robot with quite a number of sensor inputs, but not so many that interpreting them would overwhelm the available processing power. I decided to make a simple two-wheeled robotic platform with a lot of flexibility and load it up with appropriate sensors.
One aspect of my robotics philosophy is that information from simple sensors can be highly informative. Current robot designs jump too quickly to complex, high-bandwidth data sources, and then do a marginal job of interpreting the information from those sources in software. I am inspired by insects and other small creatures that seem to have small numbers of sensors, for example eyes with only a few photoreceptors, yet still show very complex adaptive behaviors, often leagues beyond what we can do with today's machines. Part of this is due to the efficiency with which they extract every little bit of useful information from the sensory data, including correlations we would never think of. I am interested in applying experience gained from machine learning to extract information from sensors that could not easily be obtained with hand-coded algorithms.
My rolling robot has two wheels, each with a wheel encoder to provide feedback on position or wheel rotation speed. It also has an infrared range finder that can indicate the distance to surfaces from about 6 feet away down to a few inches. Two logarithmic light sensors are included to sample the light field around the machine. These sensors from Adafruit are quite good because they don't saturate and they work over a wide range of light levels. In addition there is a complete nine-degree-of-freedom IMU on board: accelerometer, gyroscope, and magnetometer. This, plus the wheel encoders, allows for potentially very accurate knowledge of the robot's state of motion.
The software is built with the avr-gcc toolchain on the Mac, and I use a Pololu programmer to flash code via the ICSP interface. There is also a serial link that is useful for debugging; for example, I have used it to print out all the live sensor data for calibrating the IMU.
One of the initial issues with this two-wheeled robot is that it is not dynamically stable: it easily acts like a pendulum and rolls around doing somersaults uncontrollably, especially when there are sudden changes in motor torque. The dynamics are the same as balancing an inverted pendulum robot, except that the stable point is with the weight at the bottom. I was expecting to need some fancy Kalman-filter control system to stabilize the motion, but I found that merely adding a contribution from the front-back accelerometer output to the wheel torque was enough to lock it into a steady pose. When the robot falls forwards, the accelerometer signal makes the wheels turn slower to bring it back into balance, and when it falls backwards, the wheels speed up to bring the axle back over the center of gravity. This was an initial idea that happened to work, although I expect I will build a more sophisticated system eventually.
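The accelerometer correction can be sketched in a few lines of C. The gain value and the function names here are hypothetical placeholders, not the actual firmware; the point is just that the front-back acceleration is mixed linearly into the wheel command:

```c
/* Sketch of the balance correction, not the real firmware.
 * ACCEL_GAIN is a hypothetical proportional gain; accel_fb is the
 * signed front-back accelerometer sample, positive when the body
 * pitches forward and zero when upright. */
#define ACCEL_GAIN 3

/* Returns the corrected wheel command: falling forward (accel_fb > 0)
 * slows the wheels, falling backward speeds them up, so the axle is
 * driven back under the center of gravity. */
int balance_wheel_command(int base_cmd, int accel_fb)
{
    return base_cmd - ACCEL_GAIN * accel_fb;
}
```

In practice the gain has to be tuned so the correction damps the pendulum swing without overshooting into an oscillation of its own.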
I have used the range finder to write a simple program that avoids running into obstacles. It does this by randomly making turns that become tighter as the distance sensed by the range module becomes shorter. This also works surprisingly well, although the main failures now come from the limited field of view of the sensor: the robot often hits corners with its wheels that were not visible immediately in front of it. Another failure mode is collision with low objects, such as power cords or carpet edges, which stick up only slightly from the ground plane and cannot be imaged unless the robot looks down. There are many ways to recover from these problems, either by sensing the disparity between the IMU signals and the wheel rotation, or by having the robot make turning or bobbing motions to scan a wider area with its sensor.
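The core of the avoidance rule, mapping range to turn tightness, can be sketched as follows. The constants and names are hypothetical, and the random choice of turn direction described above is left out; only the "closer obstacle means tighter turn" ramp is shown:

```c
/* Sketch of the range-to-turn mapping, with hypothetical constants. */
#define RANGE_FAR_CM 180   /* at or beyond this range, drive straight */
#define TURN_MAX      50   /* tightest turn command                   */

/* Map a range reading (in cm) to a turn magnitude: zero when the way
 * is clear, ramping linearly up to TURN_MAX as an obstacle closes in.
 * The sign (turn direction) would be chosen randomly elsewhere. */
int turn_for_range(int range_cm)
{
    if (range_cm >= RANGE_FAR_CM)
        return 0;
    if (range_cm <= 0)
        return TURN_MAX;
    return TURN_MAX * (RANGE_FAR_CM - range_cm) / RANGE_FAR_CM;
}
```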
One principle that I want to make use of on this robot is the idea of the efference copy in neuroscience. This is where the commands the robot issues to produce self-motion are fed back to null out the components of its sensory signals that are the predictable consequences of those actions. That way the robot acts only on environmental signals that carry a component of “surprise”: an error signal that indicates departure from intended outcomes. For example, when the robot makes a turn it expects the IMU to produce a specific signal, predictable given the robot’s dynamics; if that does not happen, something is wrong. Likewise, if the robot takes distance readings while turning and then turns back again, the readings should be consistent once it faces the original direction. Any difference implies a change in the environment.
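A minimal sketch of the efference-copy idea for the turn-and-gyro example might look like this. The linear prediction model and the gain are placeholder assumptions; a real model would come from the robot's measured dynamics:

```c
/* Efference-copy sketch: predict the gyro reading from the motor
 * command, and pass on only the residual "surprise".  The gain and
 * the linear model are hypothetical placeholders. */
#define TURN_TO_GYRO_GAIN 4   /* gyro counts per unit of turn command */

/* Expected gyro signal given the issued turn command. */
int predicted_gyro(int turn_cmd)
{
    return TURN_TO_GYRO_GAIN * turn_cmd;
}

/* Surprise = measured minus predicted; near zero when the motion
 * went as commanded, nonzero when something unexpected happened. */
int gyro_surprise(int measured_gyro, int turn_cmd)
{
    return measured_gyro - predicted_gyro(turn_cmd);
}
```

Downstream behavior would then react to `gyro_surprise` rather than to the raw gyro signal, so self-generated motion is invisible to it.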
Here is a video walkthrough of the robot hardware and a short presentation of its motion.
Here is a video that demonstrates the current simple obstacle avoidance behavior. This is a work in progress, as I have only just started to think through some of the algorithms that are possible. The code has to compute everything in a 100 Hz loop, so it has limited time on an AVR microcontroller to do all the computations necessary. It will be a challenge to come up with algorithms for complex behavior that are sufficiently capable yet not too computationally expensive. I would very much like the robot to learn from its environment and potentially build some kind of map. I am thinking of doing some kind of “long short-term memory” learning, although how much of the learning will be possible on the hardware, and how much will make use of a simulation on the laptop, I am not sure.
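One common way to live within a fixed-rate budget like this is to run the cheap control code every tick and spread expensive work (such as map updates) over multiple ticks. A tiny sketch of that scheduling idea, with hypothetical names and rates:

```c
/* Sketch of budgeting work inside a 100 Hz loop: a heavy task runs
 * only every Nth tick, effectively at 10 Hz here.  The divisor and
 * task names are hypothetical. */
#define HEAVY_EVERY_N 10

/* Returns nonzero when the heavy task is due on this tick; the fast
 * balance and avoidance code would run on every tick regardless. */
int heavy_task_due(unsigned int tick)
{
    return (tick % HEAVY_EVERY_N) == 0;
}
```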
Since my background is in neuroscience and machine learning, I want to apply some of those principles to this robot. In particular, I am interested in the kinds of adaptive control systems that might exist in insect navigation, in what we might learn from them to create interesting and organic behavior, and in how control systems might be defined in the external coordinate frame rather than in the sensor frame.