The only physical sensor for our control software is an S-VHS camera that captures the field from above.
The camera produces an analog video stream in NTSC format.
Using a PCI frame grabber, we feed the images into a
PC running MS-Windows.
We capture
RGB images of size 640 × 480 pixels at a rate of 30 fps and interpret them to extract
the relevant information about the playing field.
Since the ball, as well as the robots, are color-coded,
we designed our vision software to find and track several colored objects.
These objects are the orange ball and the
robots, each marked with a yellow or blue team ball and additional colored dots.
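Tracking color-coded objects of this kind is commonly based on a per-object color model, e.g. a hue interval plus a minimum saturation. The following sketch illustrates one such representation; the class, the helper names, and all numeric ranges are illustrative assumptions, not values taken from our system.

```python
from dataclasses import dataclass

@dataclass
class ColorModel:
    """Illustrative per-object color model: a hue interval plus a
    minimum saturation below which pixels count as background."""
    name: str
    hue_min: float   # degrees, 0..360
    hue_max: float
    sat_min: float   # 0..1

    def matches(self, hue: float, sat: float) -> bool:
        # Handle hue intervals that wrap around 0/360 degrees (e.g. red).
        if self.hue_min <= self.hue_max:
            in_hue = self.hue_min <= hue <= self.hue_max
        else:
            in_hue = hue >= self.hue_min or hue <= self.hue_max
        return in_hue and sat >= self.sat_min

# Hypothetical models for the color-coded objects (ranges are guesses).
BALL = ColorModel("orange ball", 10, 40, 0.5)
YELLOW_TEAM = ColorModel("yellow team ball", 45, 70, 0.4)
BLUE_TEAM = ColorModel("blue team ball", 200, 250, 0.4)
```

A wrap-around hue test is needed because orange and red sit near the 0/360 boundary of the hue circle.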
To track the objects, we predict their positions in the next frame and then inspect the video image, first in a small window centered on the predicted position. We use an adaptive saturation threshold and intensity thresholds to separate the objects from the background. Only if an object is not found do we increase the window size and investigate larger portions of the image. Whether or not an object is present is decided on the basis of a quality measure that takes into account the hue and size distances to the model as well as geometric plausibility. When we find the desired objects, we update our model of the world with the measured parameters, such as position, color, and size.
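The predict-then-search loop described above can be sketched as follows. This is a minimal illustration under stated assumptions: the window-growth factor, the quality threshold (here simply the fraction of window pixels matching the color model, rather than the full hue/size/plausibility measure), and the function names are all hypothetical.

```python
def track_object(predict, matches, img_w=640, img_h=480,
                 start_radius=16, grow=2, quality_min=0.01):
    """Search for a color-coded object in a window centered on its
    predicted position, enlarging the window until the object is found
    or the window covers the whole image.

    predict  -- (x, y) position predicted from the previous frame
    matches  -- matches(x, y) -> bool, True where the pixel passes the
                saturation/intensity thresholds for this object
    Returns the centroid of matching pixels, or None if not found.
    """
    px, py = int(predict[0]), int(predict[1])
    radius = start_radius
    while True:
        xs = range(max(0, px - radius), min(img_w, px + radius))
        ys = range(max(0, py - radius), min(img_h, py + radius))
        hits = [(x, y) for y in ys for x in xs if matches(x, y)]
        # Toy quality measure: fraction of window pixels matching the model.
        quality = len(hits) / (len(xs) * len(ys))
        if quality >= quality_min:
            cx = sum(x for x, _ in hits) / len(hits)
            cy = sum(y for _, y in hits) / len(hits)
            return (cx, cy)
        if radius >= max(img_w, img_h):
            return None      # object not present in this frame
        radius *= grow       # enlarge the search window and retry
```

Restricting the search to a small predicted window keeps the per-frame cost low in the common case; the full image is scanned only when an object is lost.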