
3D Mapping - State of the Art

Instead of using 3D scanners, which yield consistent 3D scans directly, some groups have attempted to build 3D volumetric representations of environments with 2D laser range finders. Thrun et al. [15,28], Früh et al. [12] and Zhao et al. [30] use two 2D laser range finders for acquiring 3D data. One laser scanner is mounted horizontally, the other vertically. The latter grabs a vertical scan line, which is transformed into 3D points based on the current robot pose. Since the vertical scanner is not able to scan the sides of objects, Zhao et al. use two additional, vertically mounted 2D scanners, shifted by $ 45^\circ$, to reduce occlusions [30]. The horizontal scanner is used to compute the robot pose. The precision of the 3D data points depends on that pose and on the precision of the scanner.
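To make the vertical-scanner step concrete, the following minimal sketch (in Python) projects one vertical scan line into world coordinates given a planar robot pose (x, y, yaw), e.g., as estimated from the horizontal scanner. The function name, the NumPy usage, and the assumption of a forward-facing scan plane with no mounting offset are illustrative choices, not details of the cited systems.

    import numpy as np

    def vertical_scan_to_3d(ranges, angles, robot_pose):
        """Project one vertical 2D scan line (range, beam angle) into
        world-frame 3D points, given a planar robot pose (x, y, yaw).
        Assumes the scan plane contains the robot's forward axis and
        the scanner sits at the pose origin (illustrative simplification)."""
        x, y, yaw = robot_pose
        forward = ranges * np.cos(angles)   # distance along the robot's heading
        z = ranges * np.sin(angles)         # height above the scanner
        wx = x + forward * np.cos(yaw)      # rotate the scan plane by the heading ...
        wy = y + forward * np.sin(yaw)      # ... and translate by the robot position
        return np.column_stack([wx, wy, z])

Because every scan line is anchored to the momentary pose estimate, any error in that estimate is written directly into the 3D points, which is exactly the dependence on pose precision noted above.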

A few other groups use highly accurate, expensive 3D laser scanners [23,1,13]. The RESOLV project aimed at modeling interiors for virtual reality and tele-presence [23]; it used a RIEGL laser range finder on robots and the ICP algorithm for scan matching [7,9]. The AVENUE project develops a robot for modeling urban environments [1], using a CYRAX laser scanner and a feature-based scan matching approach for registering the 3D scans in a common coordinate system [25]. Nevertheless, in their recent work they do not use data from the laser scanner in the robot control architecture for localization [13]. The research group of M. Hebert has reconstructed environments using the Zoller+Fröhlich laser scanner and aims at building 3D models without initial position estimates, i.e., without odometry information [16].
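For reference, ICP-style scan matching, as used by the RESOLV project, alternates between establishing closest-point correspondences and computing the rigid transform that minimizes their squared distances. The sketch below is a minimal point-to-point variant with an SVD-based alignment step; the function name icp_align and the use of NumPy/SciPy are assumptions for illustration, not the implementations of the cited projects.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_align(source, target, iterations=30):
        """Align `source` (Nx3) to `target` (Mx3) with point-to-point ICP.
        Returns the accumulated rotation R, translation t, and moved points."""
        R, t = np.eye(3), np.zeros(3)
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iterations):
            _, idx = tree.query(src)                  # 1. closest-point correspondences
            matched = target[idx]
            src_mean, tgt_mean = src.mean(0), matched.mean(0)
            H = (src - src_mean).T @ (matched - tgt_mean)
            U, _, Vt = np.linalg.svd(H)               # 2. best rigid transform (SVD)
            R_step = Vt.T @ U.T
            if np.linalg.det(R_step) < 0:             # guard against reflections
                Vt[-1, :] *= -1
                R_step = Vt.T @ U.T
            t_step = tgt_mean - R_step @ src_mean
            src = src @ R_step.T + t_step             # 3. apply and accumulate
            R, t = R_step @ R, R_step @ t + t_step
        return R, t, src

In practice, convergence of such a scheme depends on a reasonable initial pose estimate, which is why registration without initial position estimates, as pursued by Hebert's group, is notable.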

Recently, different groups have employed rotating SICK scanners for acquiring 3D data. Wulf et al. let the scanner rotate around the vertical axis. They acquire 3D data while moving, so the quality of the resulting map crucially depends on the pose estimate given by inertial sensors, i.e., gyros [29]. In addition, their SLAM algorithms do not consider all six degrees of freedom. Nevado et al. present novel algorithms for post-processing 3D scans and extracting planar models [18].

Other approaches use information from CCD cameras that provide a view of the robot's environment [22,8]. Yet cameras are difficult to use in natural environments with changing light conditions. Camera-based approaches to 3D robot vision, e.g., stereo cameras and structure from motion, have difficulties providing reliable navigation and mapping information for a mobile robot in real time. Thus some groups try to solve 3D modeling by combining planar SLAM methods with cameras, e.g., in [8].

