
Robust Tracking

The input to our vision system is an analog video stream produced by an NTSC S-Video camera mounted above the field. We capture the image with a PCI frame grabber at a resolution of 640×480 pixels in RGB format at 30 frames per second. This yields an enormous data rate of about 26 MB/s, from which we want to extract the few bytes relevant to behavior control: the positions of the ball and the opponent robots, as well as the positions and orientations of our own robots.
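The quoted data rate follows directly from the capture parameters; a quick back-of-the-envelope check (assuming 3 bytes per RGB pixel, which is not stated explicitly in the text):

```python
# Sanity check of the capture data rate: 640x480 RGB at 30 fps.
# The 3 bytes/pixel figure is an assumption for packed 24-bit RGB.
width, height, bytes_per_pixel, fps = 640, 480, 3, 30

bytes_per_frame = width * height * bytes_per_pixel   # 921,600 bytes per frame
rate_mib_per_s = bytes_per_frame * fps / (1024 * 1024)

print(round(rate_mib_per_s, 1))  # ~26.4, matching the ~26 MB/s in the text
```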

The vision system analyzes only those parts of the image that contain the field; the background is ignored. This reduces the data rate, but even then it would not be possible to analyze the entire field in every frame without special-purpose hardware.

We therefore decided to develop a tracking system that predicts the positions of the interesting objects and analyzes only small windows around them. With high probability, the objects lie within these small windows, so most of the image never needs to be processed.
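The window-extraction step can be sketched as follows; a minimal illustration, not the paper's implementation. The function name, the window size, and the use of NumPy are assumptions for the sketch; the window is clamped to the image borders so objects near the field edge remain trackable.

```python
import numpy as np

def search_window(frame, predicted_xy, half_size=24):
    """Return the small sub-image around a predicted object position.

    `frame` is an H x W x 3 RGB image. The window is clamped to the
    image borders. Names and the window size are illustrative only.
    """
    h, w = frame.shape[:2]
    x, y = predicted_xy
    x0, x1 = max(0, x - half_size), min(w, x + half_size)
    y0, y1 = max(0, y - half_size), min(h, y + half_size)
    return frame[y0:y1, x0:x1]

# Only a 48x48 patch around each predicted position is analyzed,
# instead of the full 640x480 frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
win = search_window(frame, (320, 240))
print(win.shape)  # (48, 48, 3)
```

A per-object window of this size covers well under 1% of the frame, which is what makes full-field software tracking feasible at 30 fps.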

Many other teams at RoboCup'99 relied on special hardware, such as FPGAs or DSPs, to process the entire image in real time [3,4].



 
Sven Behnke
2001-01-16