3DPoseMap - 3D Pose Estimation and Mapping with a PMD (Photonic Mixing Device) Camera
The 3DPoseMap project is a DFG-funded research project. It was set up in February 2006 for a duration of four years. In 3DPoseMap we focus on the possibilities of robot pose estimation using the newly developed ToF (Time-of-Flight) cameras.
3D ToF cameras emit modulated light and measure the light reflected back from objects. The phase shift between the emitted and the reflected light directly relates to the distance the modulated light has travelled. Compared, for example, to laser range scanners, this is a relatively cheap method to measure the 3D geometry of an object, a room, etc.
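The phase-to-distance relation can be sketched in a few lines of Python. The 20 MHz modulation frequency used here is an illustrative assumption, not a property of the project's specific sensor:

```python
import math

C = 299_792_458.0   # speed of light in m/s
F_MOD = 20e6        # modulation frequency in Hz (illustrative assumption)

def phase_to_distance(phase_rad: float) -> float:
    """Convert a measured phase shift (radians) to a distance in metres.

    The light travels to the object and back, hence the factor 2 hidden in
    the denominator; the full 2*pi phase range maps onto one modulation
    wavelength, giving an unambiguous range of c / (2 * f_mod).
    """
    return (C * phase_rad) / (4.0 * math.pi * F_MOD)

# At 20 MHz the unambiguous range is c / (2 * f_mod), roughly 7.5 m:
max_range = phase_to_distance(2 * math.pi)
```

Note that the unambiguous range shrinks as the modulation frequency rises, which is the classic trade-off between range and depth resolution in ToF sensing.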
Within the project, several interesting questions arising from this measurement principle are investigated.
This work is supported by the German Research Foundation (DFG), grants KO-2044/3-1 and KO-2044/3-2.
ToF Cameras
Currently, only a few ToF cameras are available. Within the DFG research package "Dynamisches 3D Sehen" (Dynamic 3D Vision), new 2D/3D cameras are being developed, and the newly emerging possibilities of these cameras for robot navigation are explored in this project.
The available cameras come in different types. PMD (Photonic Mixing Device) sensors are available with resolutions of up to 176x144 pixels, and we are currently using a sensor of this size. This resolution is very low and not sufficient for exact pose estimation, so we combine this sensor with a high-resolution 2D camera. Combining two different cameras inherently raises the problem of calibrating the cameras relative to each other.
Here the mobile robot "Tom3D" from the University of Siegen - RST is shown.
Another 3D ToF camera, the SwissRanger from CSEM.
ToF-Camera Calibration and Camera-Rig Calibration
To solve the problem of calibrating two or more cameras relative to each other, we developed a technique to automatically calibrate the intrinsic parameters of all cameras in a rig, including ToF cameras. The estimation of depth-measurement deviations and brightness deviations is also included.
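The core of any rig calibration is recovering the fixed transform between the cameras. A minimal NumPy sketch (with hypothetical poses, not the project's calibration software) composes the per-camera poses of a common calibration target:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_target_cam1, T_target_cam2):
    """Fixed transform mapping camera-1 coordinates into camera-2 coordinates.

    Both inputs map calibration-target coordinates into the respective
    camera frame (e.g. as estimated from a checkerboard observation), so
    the rig extrinsics follow as T_21 = T_2 @ inv(T_1).
    """
    return T_target_cam2 @ np.linalg.inv(T_target_cam1)
```

Because the target is observed by both cameras simultaneously, averaging this estimate over many target poses suppresses the noise of any single observation.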
The results were presented at the XXI. ISPRS Congress in Beijing, 2008. Please check our Publications site for details.
The software we developed can be downloaded from our Calibration site.
Pose Estimation with ToF-Camera
Pose estimation of a camera with Structure-from-Motion (SfM) approaches is widely used in computer vision applications. In this project we combine SfM techniques with a ToF camera. This has significant advantages over standard SfM algorithms: it eliminates the need for lateral movement and allows metric positioning and distance measurement. As the field of view of the ToF camera is small (22x17 degrees), methods have been developed to increase the FoV by creating depth panoramas.
The generated depth panorama overlaid on a fisheye image of the 2D camera.
This panorama can be used for online tracking and pose estimation. The picture above shows the tracked camera path in a 3D model of the surroundings.
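Because the ToF camera delivers metric 3D points directly, frame-to-frame camera motion can be estimated by rigidly aligning corresponding 3D points, with no scale ambiguity. The following Kabsch-style sketch (assuming known point correspondences; it is not the project's actual tracking algorithm) illustrates the idea:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) such that Q ≈ R @ P + t.

    P, Q: (3, N) arrays of corresponding 3D points, e.g. the same scene
    features measured by the ToF camera in two successive frames.
    """
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                 # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

The solution is closed-form, which is one reason 3D-3D registration is attractive for online tracking compared to iterative 2D-only SfM updates.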
Environment Model Generation
As ToF cameras deliver dense depth maps in real time, they are well suited for 3D model generation. We developed a method for generating dense indoor models by combining the ToF camera with a CCD camera and mounting both on a pan-tilt unit; please see PMDMap for details. The generated models are well suited for mixed-reality applications and rapid previewing.
The approach has been extended to cover full 360-degree panoramas using a cylindrical projection:
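The cylindrical projection step can be sketched as follows; the panorama resolution, the vertical extent, and the simple nearest-depth z-buffering are illustrative assumptions rather than the project's implementation:

```python
import numpy as np

def to_cylindrical(points, width, height, v_range=1.0):
    """Project 3D points (N, 3; camera coords, z pointing forward) into a
    cylindrical depth panorama of size height x width.

    The column index encodes the azimuth around the vertical axis, the row
    index the (scaled) cylindrical height; the stored value is the radial
    distance. Per pixel, the nearest surface wins (z-buffering).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(x, z)                  # azimuth in (-pi, pi]
    r = np.sqrt(x**2 + z**2)                  # radial distance from axis
    v = y / np.maximum(r, 1e-9)               # cylindrical height coordinate
    cols = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    rows = ((v / v_range + 0.5) * (height - 1)).astype(int)
    pano = np.full((height, width), np.inf)
    valid = (rows >= 0) & (rows < height) & (r > 0)
    for rr, cc, rv in zip(rows[valid], cols[valid], r[valid]):
        pano[rr, cc] = min(pano[rr, cc], rv)  # keep the nearest depth
    return pano
```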
A depth panorama and a texture panorama are generated:
These panoramas can be transformed into a triangle mesh, so that real geometry is generated:
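Turning a regular depth panorama into geometry amounts to triangulating the pixel grid. A minimal sketch (returning only index triangles; back-projecting each pixel to a 3D vertex via the cylindrical model is omitted for brevity):

```python
import numpy as np

def grid_to_triangles(depth):
    """Turn a dense depth panorama (H, W) into triangle connectivity by
    splitting every 2x2 pixel quad into two triangles.

    Returns an array of vertex-index triples into the flattened (H*W)
    pixel grid; quads touching an invalid (non-finite) depth are skipped.
    """
    h, w = depth.shape
    tris = []
    for i in range(h - 1):
        for j in range(w - 1):
            if not np.all(np.isfinite(depth[i:i + 2, j:j + 2])):
                continue
            a = i * w + j      # top-left pixel of the quad
            b = a + 1          # top-right
            c = a + w          # bottom-left
            d = c + 1          # bottom-right
            tris.append((a, c, b))
            tris.append((b, c, d))
    return np.asarray(tris)
```

In practice one would also discard triangles spanning large depth discontinuities, so that foreground and background surfaces are not welded together.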