This writeup concerns a robot I built in 1995, while I was doing my PhD in Bath, UK. At that time, Inmos Transputers were big news: people were talking about parallel processing and getting excited about what Transputers and other parallel architectures could provide.
I managed to get my hands on a T425 Transputer, a fairly self-contained processor running at 20MHz, which seemed hugely fast at the time. It was also a 32-bit processor, so potentially capable of all kinds of awesome compared with the 8-bit 6502 and 16-bit 68000 Atari ST technology of the day. Handily, the device contains a microcoded multi-tasking kernel and does not need a lot of complex external circuitry to make it operational, so it seemed like a good fit for robotics. In practice, I think the processor in this robot would still have been too under-powered to do much interesting real-time vision processing.
I wanted to build a robot that would fit in a Micromouse maze, so I built it with two driven wheels and two ball-bearing wheels. Since I was studying human and computer vision, I wanted to add a camera so that the robot could try to process the visual scene. In the end I built a frame grabber for an analog camera mounted on a servo motor, so it could look in a variety of directions. I also built a processor board with 1MB of dynamic memory, and an IO board that controlled a number of things including the drive motors, and could sense their motion using simple optoelectronic devices.
Because I had no software, I had to write a whole Transputer assembler. It was a crazy thing to do, I know. But I did manage to write an assembler in C++ that let me create programs for the device, which could then be loaded via a serial link from a PC – maybe I can post that code here if I find it.
I did manage to get the frame grabber working perfectly, grabbing frames and sending them back to the host PC so that I could view them on the monitor. I also got the motor drive working OK so that the robot could move about. However, at that point I stopped working on it.
Realistically, I stopped because I was in the middle of my PhD and just working on this robot as a side project. I felt like I was being overcome by too much stuff to think about. I decided to drop the Nelson project and focus more on my PhD work, which was probably a good thing.
You can see here some photos of the boards. I used the pink-wire method of prototyping that I learned at Marconi Avionics (GEC Sensors). It involves wrapping very fine pink insulated wire around plastic formers and can give good density with DIL packages, which is what everyone was using in those days. It would be largely irrelevant with today's surface-mount technology.
Below you can read my original write-up of this project:
Nelson: A small one-eyed robot
Nelson is a low-cost wheeled robot with true computer vision. It has been designed to allow experimentation in vehicle guidance using rough visual clues. The whole assembly measures only 5″ x 5″ x 7″. It has a single axis servo-oriented CCD camera with image processing and motion control provided by a 32-bit Inmos Transputer. This CPU includes a microcoded multitasking kernel and is well suited to control applications. Here, I describe Nelson’s hardware development and some initial experimental results.
Many animals have a sense of sight which, although not the general purpose system that we possess, nevertheless gives them useful information about the environment and allows them to find their way around. Since this sense is effective even for those animals with limited nervous systems, it appears that computationally cheap methods may exist which can be usefully harnessed for artificial vision. Such methods could allow a robot to avoid obstacles and navigate effectively – even in a relatively unstructured environment.
Nelson (named after the one-eyed British admiral) is intended to be a cheap test-bed for these ideas and has been constructed part-time over a period of one year at a cost of around $500.
The mobile robot was designed to be small enough to fit in a micro-mouse maze. This is a high contrast structured environment that presents interesting problems of navigation and collision early-warning. I believe that these may be solved without building complex internal environment models and without expensive computation.
2. Hardware Overview
Nelson was intended to be cheap and is therefore constructed from off-the-shelf components; nothing was hard to obtain (except perhaps the CPU.) Figure 1 shows a block diagram of the circuitry.
Nelson has two separately driven wheels and two idling ball-bearing castors (front and back.) The wheels are connected to the motors by a 45:1 reduction to allow maximum torque at a speed of 0.5 m/s. There are four plug-in vertically mounted boards which contain all the electronics. These custom made boards are all pink-wired on square-pad Eurocard off-cuts with 64-way edge connectors mating with a back-plane that is part of the robot base. Nelson has a small wide angle monochrome CCD camera mounted up front on top of a radio-control type servomotor, so that the camera can look 90 degrees to each side.
The robot is powered by NiCad batteries which mount at the front, or from an external source via umbilical. Power consumption is generally kept low by the use of CMOS components. When stationary, the robot consumes 6W – half of which is due to the video circuitry. Provision was made for the camera and frame-grabber to be powered down under software control.
2.1. Processor Board
The processor board is Nelson’s brain and uses a 32-bit Transputer running at 20MHz connected to 1MB of RAM. Transputers are useful for robotics because not much support electronics is needed and the chip contains a micro-coded multitasking kernel that implements process scheduling and inter-process communication. It can also boot up over a serial link. The chip has a few kilobytes of fast internal RAM that can be used to run time-critical code. The processor board includes a socket which accommodates a control link for use during development.
2.2. Analog Frame-Grabber Board
This board digitizes the composite video coming from the CCD. Features include two phase-locked loops and a 7-bit flash ADC. Most of the timing is done in a SAA1043 sync generator chip. The output from this board is a 7-bit (plus overflow) pixel bus and a number of timing signals. In theory, a frame grabber could have been built that ran directly from a CCD chip, employing minimal analog circuitry. This project, however, uses an off-the-shelf camera which produces composite video and the frame grabber is therefore more complex. This provision allows for flexibility of video source since it was also desired to digitize pictures from other cameras, TV, video etc.
2.3. Digital Frame-Grabber Board
The digital frame grabber board has 128k of static RAM in which images are collected and stored under processor control. The resolution is 320 x N, where N is an even multiple of field lines. The hard work is done in the logic, so the processor only has to set a bit in a register to start the grab and wait for an event (interrupt) to indicate the frame (or field) has been grabbed. Multiple frames can be grabbed and stored consecutively in memory with minimal processor intervention.
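The grab sequence just described can be sketched in C. The register names, field widths, and helper functions below are illustrative only; the real board's latch and register layout is not documented here.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical register layout for the digital frame-grabber board;
 * names and field widths are invented for this sketch. */
struct grabber_regs {
    uint16_t start_line;  /* first field line to capture          */
    uint16_t stop_line;   /* last field line to capture           */
    uint8_t  control;     /* bit 0 = the "grab" bit in latch A    */
    uint8_t  status;      /* set by the logic when a frame is in  */
};

enum { PIXELS_PER_LINE = 320, GRAB_BIT = 0x01 };

/* Bytes occupied in the 128k frame store by one grab. */
static unsigned frame_bytes(unsigned start, unsigned stop)
{
    return (stop - start + 1) * PIXELS_PER_LINE;
}

/* Arm the grabber: set the line window, then the grab bit. Real code
 * would now wait for the VSYNC event rather than poll a status bit. */
static void start_grab(struct grabber_regs *r, uint16_t start, uint16_t stop)
{
    r->start_line = start;
    r->stop_line  = stop;
    r->control   |= GRAB_BIT;   /* grabbing enabled after next VSYNC */
}
```

A full 200-line grab at 320 pixels per line occupies 64,000 bytes, so the 128k frame store holds two complete frames.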
2.4. PSU and IO Board
The PSU uses a switched-mode supply to reduce the power lost in dropping the battery voltage down to 5V. The motor drivers are L6203 bridges which allow each motor to run forward, backward, and to freewheel. This arrangement is coupled with PWM, motor current monitoring, and shaft odometry to control the vehicle speed and acceleration.
There is an eight-channel ADC sensing, among other things, motor current and battery voltage. There are four optical gap units sensing drive shaft rotation and also position feedback on the camera servo. Eight micro-switches are used for crash detection. There is also a bi-directional IR communication link, and the robot has a 2 x 16 character LCD panel on the top, plus some status LEDs and push buttons serving as a user interface. A piezo-electric sounder provides appropriate audio signals.
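The PWM-plus-odometry arrangement above enables closed-loop speed control. A minimal sketch follows; the gain, the shaft-tick units, and the signed 8-bit duty range are invented for illustration and are not Nelson's actual values.

```c
#include <assert.h>

/* Illustrative proportional speed loop: compare commanded and measured
 * shaft ticks per control period and nudge the PWM duty. The gain and
 * duty range are assumptions made for this sketch. */
static int pwm_duty(int target_ticks, int measured_ticks, int duty)
{
    int error = target_ticks - measured_ticks; /* ticks per control period */
    duty += error / 4;                         /* simple P correction      */
    if (duty >  255) duty =  255;              /* clamp: full forward      */
    if (duty < -255) duty = -255;              /* clamp: full reverse      */
    return duty;                               /* sign selects direction   */
}
```

With the L6203 bridges, the sign of the duty value would select the bridge direction and zero duty would correspond to freewheeling.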
3. Details of Frame Grabber
Nelson’s frame grabber is constructed on two boards, each measuring 3.5″ x 4.5″ and forms half of the robot’s electronic hardware. This section gives some design details.
3.1. Analog Board
This board contains nine ICs, generates all the timing signals for the digital board and digitizes the video signal using a 6.25MHz pixel clock phase locked to HSYNC so that there is no pixel jitter. The video bandwidth in this project did not need to be very wide. The big problem with this board is supply voltages and logic levels. On-board power supply circuitry gives +6.5V, +12V and -12V from a 5V input. The SAA1043 runs from 6.5V and is not TTL or 5V CMOS compatible, so level conversion is needed in a variety of places.
Figure 2 shows a block diagram. The incoming video is terminated, buffered, and low-pass filtered before being DC clamped by the SL1488. This chip also contains an AGC amplifier so the board will accept a range of video amplitudes without upsetting the ADC. The video is digitized by the ADC207 converter. The video from the input buffer is also sent to an LM1881 sync separator providing some clamping pulses to the SL1488 and also composite sync to the SAA1043. This chip contains a 1.25MHz oscillator and PLL subsystem used to clock its internal counter chain. The internal logic and PLL are used to maintain line and frame lock with the external video source. Buffered blanking and sync signals from this IC are used by the digital board. The phase locked 1.25MHz clock is fed to a PLL multiplier circuit based around the NE564 to give the 6.25MHz pixel clock. The PLL VCO runs at twice this frequency so that a square wave can be fed into the phase detector input, which would not be the case if the divider was a “divide by 5” type.
This board was easy to set up and no problems were encountered. It can take up to one second to lock after the camera is turned on, but this is a function of the SAA1043.
3.2. Digital Board
Figure 3 shows a block diagram of the digital board which uses 23 ICs. The memory is accessed through a byte-wide data port and the RAM address is generated by an address pointer (using the counter chain) that can be initialized by the CPU. Before grabbing, the address pointer is set to the destination RAM address into which the data will be loaded sequentially. After grabbing, the CPU resets the pointer to the beginning of the data again and then reads from the data port. The logic is arranged so that each CPU read auto-increments the address pointer for fast access. This accessing arrangement needs only a few address lines to be decoded from the CPU and makes the back-plane bus small.
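The auto-incrementing data port can be modeled in a few lines of C. This is an in-memory mock of behaviour the real board implements in logic; the names are made up for the sketch.

```c
#include <assert.h>
#include <stdint.h>

/* In-memory model of the frame store and its auto-incrementing
 * data port. */
struct framestore {
    uint8_t  mem[0x20000];   /* 128k static RAM          */
    uint32_t ptr;            /* hardware address pointer */
};

/* The CPU initializes the pointer once before a grab or a read-back. */
static void set_pointer(struct framestore *f, uint32_t addr)
{
    f->ptr = addr;
}

/* Each read of the data port returns a byte and bumps the pointer, so
 * the CPU streams a whole frame without ever re-writing the address. */
static uint8_t read_port(struct framestore *f)
{
    return f->mem[f->ptr++];
}
```

Because the CPU only ever touches the data port and a couple of control registers, very few address lines need decoding, which is what keeps the back-plane bus small.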
The vertical height of the image region to grab can be controlled: Two registers control the range of lines to grab by setting the start and stop lines. These are compared with the current line count and turn the grabbing logic on and off. The grabbing logic loads exactly 320 pixels from each line into the memory when grabbing is enabled. Grabbing is enabled following the VSYNC pulse if the processor has set a grab bit in the latch marked “A” on the diagram. The processor can also enable events (interrupts) that occur on VSYNC edges and signal to the CPU that a frame has been loaded by the logic. The grabbing control logic does not use many ICs but is quite involved since it needs to orchestrate a clean changeover between video and CPU access to the RAM.
4. The Transputer as a Controller
Nelson’s CPU is a 20MHz Transputer and this section gives some background on this device. “Transputer” here refers to the T4/T8 series of 32-bit processors by Inmos, UK. These have been used extensively for robotics. They are useful in robotics (in 1995, ed.) for the following reasons:
- A T800 Transputer has similar performance to the 68020/68881 combination, but can run with no external hardware. All that is needed is power, a 5MHz clock, and a serial link connection. An external program can be downloaded and stored in internal memory. If more RAM is needed, the external memory interface makes it easy to add SRAM or DRAM. Adding some IO latches is necessary to control external hardware.
- A 20MHz Transputer is faster than many of the micro-controllers used in small scale robotics. (Adding a constant takes 50ns.)
- Transputers will boot from an external ROM, or via a serial link. These links are bi-directional at 20Mbit/s. Additional Transputers can be easily networked for distributed processing.
- Prior to booting, Transputers can be debugged over the serial links, including reading and writing anywhere in the address space. This makes development quite easy.
- The most useful aspect is that Transputers will multitask your code using the built-in kernel. Any number of processes can be run and these are automatically task switched every millisecond or so. Precise timing can be achieved with the use of high priority processes waiting for external events or waiting on an internal timer. The kernel also implements process management and communication. Whenever a process must wait for a timer, interrupt, or communication, it is de-scheduled and another one starts up until the necessary criteria are reached for the first process to be run. In this way, it is never necessary to wait around in polling loops.
An example of this last point is the control of a servomotor. The control process outputs a logic 1 to the servo, waits on a timer, outputs a logic 0, waits again, and then repeats. As soon as it executes the timer wait instruction, the process is de-scheduled automatically and other processes get to run until the time is up. The task switching is sub-microsecond and the timer has a 1us resolution. The process does not need to worry about any other processes and an inter-process communication link is used to control the servo wait period in a non-blocking manner.
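The servo process above can be sketched in C rather than Transputer assembly. `set_servo_pin` and `timer_wait` are hypothetical stand-ins for the real IO latch write and the Transputer timer-input instruction, and the 1.0-2.0ms pulse range is the usual RC-servo convention rather than a value measured from Nelson.

```c
#include <assert.h>

enum { FRAME_US  = 20000,    /* one RC-servo frame: 20 ms         */
       CENTRE_US = 1500 };   /* 1.5 ms pulse = camera straight on */

/* Map a camera angle (-90..+90 degrees) to a pulse width in
 * microseconds, assuming the common 1.0-2.0 ms servo convention. */
static int pulse_us(int angle_deg)
{
    return CENTRE_US + angle_deg * 500 / 90;
}

/* The process body, in outline (set_servo_pin and timer_wait are
 * hypothetical stand-ins for the IO latch write and the Transputer
 * timer-input instruction):
 *
 *   for (;;) {
 *       set_servo_pin(1);
 *       timer_wait(pulse_us(angle));             // de-scheduled here
 *       set_servo_pin(0);
 *       timer_wait(FRAME_US - pulse_us(angle));  // and here
 *   }
 */
```

Each `timer_wait` de-schedules the process, so other processes run until the timeout expires and no polling loop is ever needed.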
Nelson contains no ROM and this means that programs must be booted via a serial link from the host computer. This is convenient for development but annoying for demonstrations. When the booted program is running, the serial cable may be removed.
5.1. Development Support
For software development, a host interface card was constructed. This provides a PC with a high speed Transputer compatible two-way link. The system allows the status of the CPU to be monitored and the PC to reset, query or boot the processor board over the six wire link. Support software was written to allow for a range of debugging activities.
Software for the robot is written in Transputer assembly language. A cross-assembler is used to produce code on the PC host.
5.2. Proposed Control Software
The video frames were grabbed at 320 x 200, but were immediately down-sampled to 80 x 50 because processing at the full resolution was too great a software burden. Pairs of frames were grabbed at 5Hz, with the two frames of each pair spaced 40ms apart, allowing primitive motion analysis to take place in about 200ms. It was intended that some simple segmentation of the image would take place to give an idea of the direction of free space directly in front of the robot. This would allow the robot to move forward in a micro-mouse maze, controlling its trajectory to remain centered without colliding with the walls.
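One plausible form of the 320 x 200 to 80 x 50 reduction is 4 x 4 block averaging, sketched below; the original code may well have used simple sub-sampling instead.

```c
#include <assert.h>
#include <stdint.h>

#define SRC_W 320
#define SRC_H 200
#define DST_W 80
#define DST_H 50

/* Reduce a 320x200 frame to 80x50 by averaging each 4x4 pixel block. */
static void downsample(uint8_t src[SRC_H][SRC_W], uint8_t dst[DST_H][DST_W])
{
    for (int y = 0; y < DST_H; y++)
        for (int x = 0; x < DST_W; x++) {
            unsigned sum = 0;
            for (int dy = 0; dy < 4; dy++)
                for (int dx = 0; dx < 4; dx++)
                    sum += src[4 * y + dy][4 * x + dx];
            dst[y][x] = (uint8_t)(sum / 16);
        }
}
```

The reduced frame is only 4,000 bytes, which would fit comfortably in the Transputer's fast internal RAM for the time-critical processing.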
6. Initial Results
The frame grabber hardware was tested by running code to continuously grab a 320 x 200 region of the image and send it down the serial link to the host. This image was then painted on the PC screen in 320 x 200 mode to create a moving display. This configuration achieved about 4 frames per second, corresponding to 256k bytes/s transfer rate (limited by the PC IO bus bandwidth, in this case.) A similar rate was achieved when consecutive frames were differenced on the CPU before transmission.
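The on-CPU frame differencing mentioned above amounts to an absolute difference per pixel, and the quoted transfer rate follows directly from the frame size: 320 x 200 bytes per frame at 4 frames per second is 256,000 bytes/s. A minimal sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Per-pixel absolute difference of two frames -- the simplest form
 * of the consecutive-frame differencing described above. */
static void frame_diff(const uint8_t *a, const uint8_t *b,
                       uint8_t *out, unsigned n)
{
    for (unsigned i = 0; i < n; i++) {
        int d = (int)a[i] - (int)b[i];
        out[i] = (uint8_t)(d < 0 ? -d : d);
    }
}
```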
This report has covered the hardware development of a small visually guided mobile robot. This is a cheap platform for investigation into algorithms for visual guidance.