Complex visual processes such as visual attention are often too computationally expensive to run in real time on a single computer. To address this problem, we study distributed computer architectures that enable us to divide complex tasks into several smaller problems. In this paper we demonstrate how to implement a distributed visual attention system on a humanoid robot to achieve real-time operation at relatively high resolutions and frame rates. We start from a popular theory of bottom-up visual attention, which assumes that information across several modalities is used for the early encoding of visual stimuli. Our system uses five such modalities: color, intensity, edges, stereo, and motion. We show how to distribute the attention processing on a computer cluster and study the issues that arise on such systems.
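As a rough illustration of the bottom-up combination step described above, the following is a minimal sketch, assuming an Itti-Koch-style model in which per-modality conspicuity maps are normalized and combined by a weighted sum into a single saliency map; the map names, weights, and resolution are hypothetical, not taken from the paper's implementation.

```python
import numpy as np

def normalize_map(m):
    """Scale a feature map to [0, 1]; a simplified stand-in for the
    normalization step used in Itti-Koch-style saliency models."""
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def combine_saliency(feature_maps, weights=None):
    """Combine per-modality conspicuity maps (e.g. color, intensity,
    edges, stereo, motion) into one saliency map by a weighted sum;
    weights default to uniform."""
    if weights is None:
        weights = [1.0 / len(feature_maps)] * len(feature_maps)
    return sum(w * normalize_map(m) for w, m in zip(weights, feature_maps))

# Usage: five synthetic 240x320 maps standing in for the five modalities.
maps = [np.random.rand(240, 320) for _ in range(5)]
saliency = combine_saliency(maps)
y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
print(f"most salient location: ({x}, {y})")
```

In a distributed setting of the kind the paper describes, each modality's map could be computed on a separate cluster node, with only the resulting conspicuity maps gathered for the final combination.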