27 May 2014
Last year, in 2013, I designed, built and programmed my own robot. It used reflectance sensors to follow the line and avoid the obstacles. I was very proud to win the 2013 RoboCup Junior Belgium competition in the category Advanced Rescue. This year I decided to participate again. I could have improved my first robot, but instead I took on the challenge of creating a completely new one. I used a Raspberry Pi to analyse video images from the Pi Camera and to command the Dwengo microcontroller board.
My robot wasn’t the fastest, but it worked as it should, so I achieved my goal, and I even took third place.
The project turned out to be anything but easy, but the image analysis part was very interesting as it involved not only complex data structures but also lots of math, especially trigonometry. As a result, this project was much more complex than last year’s.
The Raspberry Pi was programmed in C++ using the OpenCV libraries, the wiringPi library (by Gordon Henderson) and the RaspiCam OpenCV interface library (by Pierre Raufast, improved by Emil Valkov). Using a camera has some big advantages. First of all, you don’t have a bunch of sensors mounted close to the ground that catch on obstacles and are disturbed by irregularities in the floor. The second benefit is that you can see what is in front of the robot without having to build a swinging sensor arm. So you have information not only about the actual position of the robot above the line, but also about the position of the line ahead, which allows you to calculate the curvature of the line. In short, following the line becomes much more controllable. By using edge detection rather than greyscale thresholding, the program is virtually immune to shadows and grey zones in the image. If the line had had fewer hairpin bends and I had had a bit more time, I would have implemented a speed-regulating algorithm based on the curvature of the line. This would surely have improved the performance of the robot.
I also used the camera to detect and track the green direction fields at T-junctions, where the robot has to choose the right direction. For this I used a simple colour blob tracking algorithm.
Once the Raspberry Pi has found the line, it sends the position data and commands at 115.2 kbps over the hardware serial port to the Dwengo microcontroller board. The Dwengo board does some additional calculations, such as taking the square root of the proportional error and squaring the ‘integral error’ (the curvature of the line). I used a serial interrupt and made the serial communication as robust as possible; the Dwengo board sends back an answer character to control the data stream. The microcontroller also reads the analogue output of the SHARP long-range IR sensor to detect the obstacles and to scan for the container. So the microcontroller controls the robot, while the Raspberry Pi does an excellent job running the CPU-intensive line-following program.
To build the robot platform I used the same construction technique as last year, but the new platform is completely different in design. Last year, I made a ‘riding box’ by taking almost the maximum allowed dimensions and mounting the electronics somewhere on or in it.
This time, I took a different approach. Instead of using an outer shell (like insects have), I made a design that supports and covers the parts only where necessary. As a result, the robot not only looks much better, but the different components are also much easier to mount and there is more space for extensions and extra sensors. The building instructions for the platform can be found here.
On the day of the RCJ competition I had some bad luck, as there wasn’t enough light in the competition room and the shutter time of the camera became much longer. As a consequence, the robot had much more difficulty following sharp bends in the line. However, this problem did not affect the final outcome of the competition.
Arne Baeyens - Robotanicus