Control of a mobile robot with evolved feature detectors

Genetic programming was used to localize a mobile robot in a simulated environment (Figure 1). We evolved a function which maps the distance readings of the robot's sensors to robot positions; this function represents an internal model of the environment. The robot perceived its environment through a laser range finder or a similar device supplying distance information. Moments of the distance distribution served as terminal symbols; arithmetic and trigonometric functions together with a conditional statement served as primitive functions.
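The terminal symbols mentioned above can be illustrated with a small sketch. The function below computes raw moments of a set of distance readings; the function name, the choice of raw (rather than central) moments, and the maximum order are our own assumptions for illustration, not details from the original experiment.

```python
def distance_moments(readings, max_order=3):
    """Raw moments of a distance distribution (hypothetical terminal set).

    The k-th raw moment is the mean of the k-th powers of the readings.
    """
    n = len(readings)
    return [sum(r ** k for r in readings) / n for k in range(1, max_order + 1)]

# Example: a handful of simulated range-finder readings (meters)
readings = [1.0, 1.5, 2.0, 2.5, 3.0]
m1, m2, m3 = distance_moments(readings)
```

An evolved expression would then combine such moment values with arithmetic and trigonometric primitives to output a position estimate.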

Figure 1: Where is the mobile robot?

A control architecture was evolved for a real service robot; the experiment was carried out entirely on the real robot. The robot perceived its environment through six virtual sonar sensors and always moved with a constant translatory velocity. The control architecture's task was to navigate the robot along a corridor. The result of the experiment, which took two months to complete, is shown in Figure 2.
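The shape of such a controller can be sketched as a function from the six sonar readings to a rotatory velocity command, with the translatory velocity held constant. The expression below is purely hypothetical; the actual evolved architecture is not given in the text.

```python
def evolved_controller(sonar):
    """Hypothetical evolved expression: maps 6 sonar readings (meters)
    to a rotatory velocity command. Steers toward the side of the
    corridor with more free space."""
    left = min(sonar[0:3])   # closest obstacle on the left side
    right = min(sonar[3:6])  # closest obstacle on the right side
    return 0.5 * (left - right)  # positive value turns toward the left

V_TRANS = 0.3  # constant translatory velocity in m/s (assumed)

sonar = [1.2, 0.9, 1.0, 1.5, 1.8, 1.6]
omega = evolved_controller(sonar)  # rotatory velocity command
```

During evolution, each candidate expression of this form would be evaluated by how well it keeps the robot inside the corridor.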

Figure 2: Path of the robot using the evolved control architecture.

Using genetic programming, we evolved interest operators which can be used to calculate sparse optical flow. To evolve the interest operators, we defined several quality measures which are important for the calculation of sparse optical flow; the quality of an image operator always depends on the current environment and the task for which the operator is used. Several filter operations, such as Gaussian and Gabor filters, served as primitive functions. The use of genetic programming enabled us to construct an adaptive operator for the extraction of interesting points. Figure 3 shows the sparse optical flow calculated with the evolved operator.
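As a minimal sketch of how such filter primitives compose into an interest operator, the following difference-of-Gaussians response picks out the strongest-responding pixels. The composition, filter scales, and point count are assumptions for illustration; the actual evolved operator is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def interest_operator(image, s1=1.0, s2=2.0):
    """One possible composition of Gaussian filter primitives:
    a difference-of-Gaussians response (scales s1, s2 hypothetical)."""
    return gaussian_filter(image, s1) - gaussian_filter(image, s2)

def extract_points(image, n=50):
    """Return the n pixel coordinates with the strongest operator response."""
    resp = np.abs(interest_operator(image))
    idx = np.argsort(resp.ravel())[-n:]
    return np.column_stack(np.unravel_index(idx, resp.shape))
```

The extracted points would then be tracked across frames to obtain the sparse optical flow.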

Figure 3: Sparse optical flow calculated using the evolved operator.

The algorithm for visual control of a mobile robot developed during the previous year was extended: the translatory velocity of the robot is now controlled as well. The algorithm extracts interesting points from the image sequence and calculates the sparse optical flow induced by the translatory motion of the camera; status information of the robot is used to compensate for the rotatory motion of the camera. The optical flow is transformed into complex log space (Figure 4), and the difference between the left and right peripheral flow is used to control the rotatory velocity of the robot. The translatory velocity is controlled such that the perceived optical flow remains constant.
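The two control laws described above can be sketched as follows. The complex log mapping and the balance-style controller follow the text; the gain values and the flow reference are hypothetical.

```python
import numpy as np

def complex_log(points, center):
    """Map image points (x, y) into complex log space relative to the
    image center: real part = log radius, imaginary part = angle."""
    z = (points[:, 0] - center[0]) + 1j * (points[:, 1] - center[1])
    return np.log(z)

def control(flow_left, flow_right, flow_ref, k_rot=1.0, k_trans=0.5):
    """Balance-style control law (gains k_rot, k_trans hypothetical):
    rotatory velocity from the left/right peripheral flow difference,
    translatory velocity from the deviation of the perceived flow
    from a reference value flow_ref."""
    omega = k_rot * (np.mean(flow_left) - np.mean(flow_right))
    flow = 0.5 * (np.mean(flow_left) + np.mean(flow_right))
    v = k_trans * (flow_ref - flow)  # slow down when flow exceeds the reference
    return omega, v
```

Balancing the peripheral flow keeps the robot centered, while holding the perceived flow at a reference value adapts the speed to the clearance of the surroundings.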

Figure 4: Inverse image of the transformation into complex log space.

This work was supported in part by a scholarship granted to the author under the Landesgraduiertenförderungsgesetz.