DLR SpaceBot Cup 2015 - News / Blog

12.03.2015: Team

Today, our whole team, consisting of students and research assistants, met for the first time.


Fig. 1: Attempto Tübingen - SpaceBot Cup 2015 Team

18.06.2015: Object Detection

For the first time we can detect the blue cup even when it is tilted. The following pictures show the raw image from a camera mounted on our manipulator. The cup is detected by color segmentation, and its pose is estimated by ellipse fitting in a RANSAC scheme.
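The RANSAC ellipse fit can be sketched roughly as follows. This is a simplified numpy version that scores candidates by algebraic (not geometric) distance, and all function names are made up for illustration; the real pipeline would run something like this on the contour points of the color-segmented cup:

```python
import numpy as np

def fit_conic(pts):
    """Fit conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 to points (least squares)."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M)
    return vt[-1]  # unit-norm coefficient vector (right singular vector)

def conic_residuals(p, pts):
    """Algebraic distance of each point to the conic (cheap inlier test)."""
    x, y = pts[:, 0], pts[:, 1]
    return np.abs(p[0]*x**2 + p[1]*x*y + p[2]*y**2 + p[3]*x + p[4]*y + p[5])

def ransac_ellipse(pts, iters=200, thresh=0.05, rng=None):
    """Repeatedly fit a conic to 5 random points, keep the fit with most inliers."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(len(pts), bool)
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        inliers = conic_residuals(fit_conic(sample), pts) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine on all inliers of the best hypothesis
    return fit_conic(pts[best_inliers]), best_inliers

def conic_center(p):
    """Recover the ellipse center from the conic coefficients."""
    A, B, C, D, E, _ = p
    return np.linalg.solve([[2*A, B], [B, 2*C]], [-D, -E])
```

The `conic_center` step recovers the ellipse center from the fitted coefficients, which is the starting point for the pose estimate.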

cup tilted and detected

Fig. 2: Detected cup in an image taken by our arm mounted camera

After an object has been detected, its pose is transformed into a global frame of reference. This allows us to track the pose over time, combining multiple measurements to improve accuracy. Additionally, our robots can share the global object positions among themselves.
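As a minimal sketch of this step, assuming the camera pose in the global frame is known from localization (names are made up, and a simple running average stands in for whatever filtering is actually used to combine measurements):

```python
import numpy as np

def to_global(T_world_cam, p_cam):
    """Transform a 3D point from the camera frame into the global frame.

    T_world_cam: 4x4 homogeneous transform of the camera in the world frame.
    """
    return (T_world_cam @ np.append(p_cam, 1.0))[:3]

class TrackedObject:
    """Fuses repeated global position measurements by incremental averaging."""
    def __init__(self):
        self.n = 0
        self.position = np.zeros(3)

    def update(self, p_world):
        self.n += 1
        # incremental mean: mean += (x - mean) / n
        self.position += (p_world - self.position) / self.n
```

Because the fused position lives in the global frame, it is exactly the quantity the robots can exchange with each other.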

cup detected and transformed

Fig. 3: The detected cup is projected into a global reference frame and displayed alongside the sensor data the robot sees in front of itself.

24.06.2015: First Exploration Test

We performed our first exploration experiments using our buggies. One robot explored a large part of our department corridors autonomously. We plan to extend the exploration strategy to utilise multiple robots for the competition.

map of the explored corridor

Fig. 4: Exploration map encoded as a cost map. Yellow cells are obstacles, in this case mainly walls. Gray cells are unknown and have not been observed yet. Red and blue cells are free, colored by their cost: the closer to an obstacle, the higher the cost and the redder the cell. Cyan cells lie on the frontier between known and unknown space and are candidates for the next exploration target. Finally, the currently detected obstacles are visible on the lower left as a red point cloud.
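Frontier cells like the cyan ones above can be found with a simple scan over the grid: a free cell is a frontier candidate if at least one of its neighbors is still unknown. A minimal sketch (the cell encoding is hypothetical, not our actual cost map format):

```python
import numpy as np

# assumed cell states for this sketch
UNKNOWN, FREE, OBSTACLE = -1, 0, 1

def frontier_cells(grid):
    """Return (row, col) of free cells bordering unknown space (4-neighborhood)."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers
```

Picking the next exploration target then reduces to choosing among these candidates, e.g. by path cost from the robot's current pose.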

path generated during map exploration

Fig. 5: A new exploration target has been selected (red cylinder). The robot has planned a path to the target, including a complex turning maneuver.

30.06.2015: Obstacle Detection

Our large Summit XL robots are equipped with four ASUS Xtion Pro Live cameras that measure depth in addition to regular RGB images. The smaller exploration buggies will use two of the same cameras. For every depth measurement, we compute a heat map that assigns a danger value to each point. This value is then thresholded to detect insurmountable obstacles.
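A grid-based sketch of the danger/threshold idea (our actual computation runs per depth point; the cell resolution and the step-height limit here are made-up values):

```python
import numpy as np

def danger_map(height, max_step=0.10):
    """Per-cell danger in [0, 1] from local height differences.

    height: 2D grid of surface heights (m).
    max_step: assumed step height (m) the robot can still climb.
    """
    step = np.zeros_like(height)
    # largest absolute height jump to any 4-neighbor
    dv = np.abs(height[1:, :] - height[:-1, :])   # vertical neighbor diffs
    dh = np.abs(height[:, 1:] - height[:, :-1])   # horizontal neighbor diffs
    step[1:, :] = np.maximum(step[1:, :], dv)
    step[:-1, :] = np.maximum(step[:-1, :], dv)
    step[:, 1:] = np.maximum(step[:, 1:], dh)
    step[:, :-1] = np.maximum(step[:, :-1], dh)
    return np.clip(step / max_step, 0.0, 1.0)

def obstacles(height, threshold=1.0):
    """Cells whose danger value reaches the threshold are insurmountable."""
    return danger_map(height) >= threshold
```

A flat patch yields danger 0 everywhere, while a box taller than the step limit marks its cell and the adjoining cells as obstacles.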

obstacle detection in test arena

Fig. 6: Heat map generated for the forward-looking camera. Red points are dangerous; the darker a point, the less dangerous it is.

10.09.-11.09.2015: Qualification

We participated in the qualification at the Brückenforum in Bonn.


Leia in the qualifiers

Fig. 7: Our robot in the qualification arena.

In the following video, we demonstrate the collection of a cuboid and a cylindrical object. First, the objects are detected with the ASUS Xtion RGB-D sensors mounted on top of the robot. The base then moves to a position from which the manipulator can reach the object, and a trajectory to a pre-grasp position is planned and executed. Finally, the object is grasped using visual servoing based on an RGB camera and a sonar sensor mounted on the arm. The individual tasks are coordinated by a finite state machine.
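The finite state machine can be pictured roughly like this (the state names and the fall-back-to-detection behavior are illustrative, not our exact implementation):

```python
from enum import Enum, auto

class State(Enum):
    DETECT = auto()       # look for the object with the Xtion sensors
    APPROACH = auto()     # drive the base into grasping range
    PRE_GRASP = auto()    # plan and execute a trajectory to a pre-grasp pose
    SERVO_GRASP = auto()  # visual servoing with arm camera and sonar
    DONE = auto()
    FAILED = auto()

def step(state, ok):
    """Advance the collection FSM; `ok` reports success of the current step."""
    if not ok:
        # on failure, fall back to detection rather than aborting outright
        return State.DETECT if state != State.DETECT else State.FAILED
    transitions = {
        State.DETECT: State.APPROACH,
        State.APPROACH: State.PRE_GRASP,
        State.PRE_GRASP: State.SERVO_GRASP,
        State.SERVO_GRASP: State.DONE,
    }
    return transitions.get(state, state)
```

Each state wraps one of the subsystems described above, so the same machine can be rerun per object.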

Video 3: Demonstration of the object collection for the DLR SpaceBot Camp 2015

13.11.2015: The main event

We successfully participated in the SpaceBot Camp 2015, which took place at the Medienpark NRW in Köln-Hürth.



Fig. 8: Elevation map of the exhibition arena. It is a 2.5D map, storing a height value for each (x, y) position.
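Such a 2.5D map can be built by rasterizing the point cloud into a grid and keeping one height per cell, e.g. the maximum. A sketch with assumed resolution and map size (the actual values and aggregation rule may differ):

```python
import numpy as np

def elevation_map(points, resolution=0.1, size=10.0):
    """Build a 2.5D elevation map: maximum point height per (x, y) cell.

    points: (N, 3) array of (x, y, z); resolution and size in meters.
    Unobserved cells stay NaN.
    """
    n = int(size / resolution)
    grid = np.full((n, n), np.nan)
    ix = (points[:, 0] / resolution).astype(int)
    iy = (points[:, 1] / resolution).astype(int)
    valid = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    for x, y, z in zip(ix[valid], iy[valid], points[valid, 2]):
        if np.isnan(grid[x, y]) or z > grid[x, y]:
            grid[x, y] = z
    return grid
```

The cost map in Fig. 9 can then be derived from such a grid by classifying cells with large height jumps as obstacles.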



Fig. 9: Cost map of the exhibition arena. Yellow indicates obstacles, and grey the traversable terrain.