The global search method is needed if robots are lost, e.g. due to occlusions. It is not needed for the ball, since we enlarge its dynamic search window until it covers the whole field.
Global search analyzes the entire field after the objects already found have been removed from the image. We search for the colors of the dots contained in the models of the lost robots and try to combine them into valid robots.
To reduce pixel noise, we spatially aggregate the color information as follows. First, we compute the distance of all pixels to the desired colors in RGB color space. Next, we subsample these distance maps twice, each time averaging blocks of four pixels, as shown in Figure 3. We then find the best positions for each color, enforcing a minimal distance between dots. Using the lists of dots found, we search for combinations of dots that fulfill the geometrical constraints of the robot models. This search starts with the dots of the highest quality. If more than one robot from a team is lost, we assign the found robots to the models with the closest positions.
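The aggregation and dot-finding steps above can be sketched as follows. This is a minimal NumPy illustration, not the original implementation; all function names, the distance threshold, and the greedy peak-picking strategy are assumptions for the sake of the example.

```python
import numpy as np

def color_distance_map(image, color):
    # Euclidean distance of every pixel to a target RGB color.
    diff = image.astype(np.float32) - np.asarray(color, dtype=np.float32)
    return np.sqrt((diff ** 2).sum(axis=-1))

def subsample_2x2(dist):
    # Average each non-overlapping 2x2 block (four pixels),
    # halving both image dimensions; applied twice in the pipeline.
    h, w = dist.shape
    return dist[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def find_dots(dist, max_dots, min_separation, threshold):
    # Greedily accept the best (lowest-distance) positions while
    # enforcing a minimal spacing between accepted dots.
    dots = []
    order = np.dstack(np.unravel_index(np.argsort(dist, axis=None), dist.shape))[0]
    for y, x in order:
        if dist[y, x] > threshold:
            break  # remaining candidates are too far from the target color
        if all((y - dy) ** 2 + (x - dx) ** 2 >= min_separation ** 2
               for dy, dx, _ in dots):
            dots.append((y, x, dist[y, x]))
            if len(dots) == max_dots:
                break
    return dots

# Usage: a synthetic 16x16 frame with one red dot; after two
# subsampling passes the 4x4 map still localizes it.
img = np.zeros((16, 16, 3), dtype=np.uint8)
img[4:8, 4:8] = (255, 0, 0)
d = subsample_2x2(subsample_2x2(color_distance_map(img, (255, 0, 0))))
print(find_dots(d, max_dots=1, min_separation=2, threshold=100.0))
```

The candidate dots returned this way would then be fed to the combinatorial search over the robot models' geometrical constraints.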
We avoid calling the costly global search routine in every frame, since this would reduce the frame rate and thus increase the risk of losing further objects. It would also add latency to the overall system's reaction time. Fortunately, global search is mostly needed when the game is interrupted, e.g. for a kickoff. Then, people moving robots by hand cause occlusions that trigger the global search. To avoid this interference, the FU-Fighters robots position themselves for kickoff autonomously.
During regular play, the processor load caused by the computer vision running at 30 Hz is as low as 30% on a Pentium-II 300, leaving enough time for the behavior control and communication modules that run on the same machine.