Next: Behavior Up: RoboCup'99 (F180) Team Description: Previous: Electrical Design

Video

The only physical sensor for our control software is an S-VHS camera that looks down on the field from above and outputs a video stream in NTSC format. Using a PCI frame grabber, we feed the images into a PC running MS-Windows. We capture 640x480 RGB images at a rate of 30 fps and interpret them to extract the relevant information about the world. Since the ball as well as the robots are color-coded, we designed our vision software to find and track multiple colored objects: the orange ball and the robots, which are marked with colored dots in addition to the yellow or blue team ball.
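Classifying pixels by color can be sketched as a nearest-hue test. The reference hues and the tolerance below are hypothetical calibration values, not taken from the paper; the actual system uses adaptive thresholds described in the next paragraph.

```python
import colorsys

# Hypothetical reference hues in degrees for the color-coded objects;
# real values would come from calibration on the actual field lighting.
OBJECT_HUES = {"ball": 30.0, "team_yellow": 60.0, "team_blue": 240.0}

def classify_pixel(r, g, b, tolerance=20.0):
    """Return the label whose reference hue is closest to the pixel's hue,
    or None if no reference hue lies within `tolerance` degrees."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue = h * 360.0
    best, best_dist = None, tolerance
    for label, ref in OBJECT_HUES.items():
        d = abs(hue - ref)
        d = min(d, 360.0 - d)  # hue is circular
        if d < best_dist:
            best, best_dist = label, d
    return best
```

In practice such a per-pixel test would be combined with the saturation and intensity thresholds mentioned below, so that desaturated background pixels are rejected regardless of hue.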

To track the objects, we predict their positions in the next frame and then inspect the video image first within a small window centered on the predicted position. We use an adaptive saturation threshold and intensity thresholds to separate the objects from the background. Only if an object is not found do we increase the window size and investigate larger portions of the image. Whether or not the object is present is decided on the basis of a quality measure that takes into account the hue and size distances to the model, as well as geometrical plausibility. When we find the desired objects, we update our model of the world with the measured parameters, such as position, color, and size.
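The search strategy above can be sketched as a loop over windows of growing size. The helper `image_search`, the initial and maximum window sizes, and the quality threshold are all assumptions for illustration; the paper does not specify these values.

```python
def track_object(image_search, predicted_pos, initial_size=20,
                 max_size=320, quality_threshold=0.5):
    """Search for an object in windows of growing size centered on the
    predicted position.  `image_search(center, size)` is assumed to return
    a (measurement, quality) pair for the best candidate in the window,
    with measurement None when nothing plausible is found."""
    size = initial_size
    while size <= max_size:
        measurement, quality = image_search(predicted_pos, size)
        if measurement is not None and quality >= quality_threshold:
            return measurement  # found; the caller updates the world model
        size *= 2  # enlarge the search window and try again
    return None  # object not present in this frame
```

Because the predicted position is usually accurate, most objects are found in the smallest window, which keeps the per-frame processing cost low.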

For each of our robots we compute a local view that serves as input to the control software.
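Computing such a local view amounts to transforming points from global field coordinates into each robot's egocentric frame. The frame convention below (x forward, y to the left) is an assumption; the paper does not specify it.

```python
import math

def to_local_view(robot_pos, robot_heading, world_point):
    """Transform a point from global field coordinates into the robot's
    egocentric frame (assumed convention: x forward, y to the left).
    robot_heading is the robot's orientation in radians."""
    dx = world_point[0] - robot_pos[0]
    dy = world_point[1] - robot_pos[1]
    # Rotate the offset by the negative heading to align with the robot.
    c, s = math.cos(-robot_heading), math.sin(-robot_heading)
    return (dx * c - dy * s, dx * s + dy * c)
```

A ball directly ahead of a robot, for example, always maps to a local position on the positive x-axis, regardless of where the robot stands on the field.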


Sven Behnke
1999-10-07