
Tracking Colored Objects in the Video Input

The only physical sensor for our behavior control software is an S-VHS camera that looks at the field from above and outputs a video stream in NTSC format. Using a PCI frame grabber, we feed the images into a PC. We capture RGB images of size 640×480 at a rate of 30 fps and interpret them to extract the relevant information about the world. Since the ball and the robots are color-coded, we designed our vision software to find and track multiple colored objects. These objects are the orange ball and the robots, which are marked with two colored dots in addition to the yellow or blue team ball.
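
To make the following description concrete, here is a minimal sketch in C++ of how such a color-coded object could be represented for tracking. The structure names and fields are illustrative assumptions and are not taken from our implementation.

    // Illustrative model of a tracked color-coded object (ball, team ball,
    // or robot dot). Names and fields are assumptions, not the actual code.
    struct ColorModel {
        float hue;    // expected hue of the marker, in degrees [0, 360)
        float size;   // expected blob size, in pixels
    };

    struct TrackedObject {
        ColorModel model;   // color and size model, adapted over time
        float x, y;         // last observed position in image coordinates
        float vx, vy;       // velocity estimate used to predict the next position
        bool  found;        // whether the object was located in the last frame
    };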

To track the objects, we predict their positions in the next frame and then inspect the video image first in a small window centered on the predicted position. We use an adaptive saturation threshold and intensity thresholds to separate the objects from the background. Only if an object is not found do we increase the window size and investigate larger portions of the image. The decision whether or not an object is present is based on a quality measure that takes into account the hue and size distances to the model as well as geometric plausibility. When we find the desired objects, we adapt our model of the world using the estimates for position, color, and size. We also added an identification module to the vision system that recognizes individual robots by reading a black-and-white binary code.
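
The tracking step can be summarized in the following sketch, which uses the ColorModel and TrackedObject structures introduced above. It is only an illustration under stated assumptions: the helper searchWindow stands for the windowed blob search with saturation and intensity thresholds and is merely declared here, the quality measure is simplified (it omits the geometric plausibility term), and all thresholds, tolerances, and adaptation rates are assumed values rather than those of the actual system.

    #include <algorithm>
    #include <cmath>

    // Result of the windowed blob search (illustrative).
    struct Blob { float x, y, hue, size; bool valid; };

    // Assumed helper: scans a square window of the RGB frame, applies the
    // saturation and intensity thresholds, and returns the best matching blob.
    Blob searchWindow(const unsigned char* rgb, int width, int height,
                      int cx, int cy, int halfSize, const ColorModel& m);

    // Simplified quality measure: combines the hue and size distances to the
    // model into a score in [0, 1]; tolerances are assumed values.
    float quality(const Blob& b, const ColorModel& m) {
        float hueDist = std::fabs(b.hue - m.hue);
        hueDist = std::min(hueDist, 360.0f - hueDist);            // hue is circular
        float hueScore  = std::max(0.0f, 1.0f - hueDist / 30.0f);
        float sizeScore = std::max(0.0f, 1.0f - std::fabs(b.size - m.size) / m.size);
        return hueScore * sizeScore;
    }

    // One tracking step for a single object: predict, search a small window,
    // enlarge the window only if necessary, then update the world model.
    void trackObject(const unsigned char* rgb, int width, int height,
                     TrackedObject& obj) {
        int px = static_cast<int>(obj.x + obj.vx);   // predicted position
        int py = static_cast<int>(obj.y + obj.vy);

        const float kMinQuality = 0.5f;              // acceptance threshold (assumed)
        for (int halfSize = 8; halfSize <= 128; halfSize *= 2) {
            Blob b = searchWindow(rgb, width, height, px, py, halfSize, obj.model);
            if (b.valid && quality(b, obj.model) > kMinQuality) {
                obj.vx = b.x - obj.x;                // update velocity estimate
                obj.vy = b.y - obj.y;
                obj.x  = b.x;
                obj.y  = b.y;
                // slow adaptation of the color model (hue wrap ignored for simplicity)
                obj.model.hue  = 0.9f * obj.model.hue  + 0.1f * b.hue;
                obj.model.size = 0.9f * obj.model.size + 0.1f * b.size;
                obj.found = true;
                return;
            }
        }
        obj.found = false;   // not found; caller may fall back to a full-image scan
    }

In such a scheme, trackObject would be called once per tracked object in every frame; objects that remain unfound trigger a search over larger portions of the image, as described above.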

