Multimedia Information Processing

Light Field Capturing

In our multimedia lab we maintain a large movable multi-camera array for room-scale light field capturing, featuring commodity color cameras and Time-of-Flight cameras for depth acquisition.

The capturing system currently consists of 24 IDS uEye RGB cameras and 2 Kinect v2 RGB-D cameras assembled on a beam spanning a horizontal range of approx. 2.5 meters. The beam can be moved by two linear axes, over a range of 25 cm horizontally and 2 meters vertically, using two isel iMC-S8 microstep controllers with very precise positioning. Image capturing with the uEye cameras is synchronized by an external hardware trigger.


Research Topics

The captured RGB-D data can be used as a basis for several applications, e.g., full-parallax imaging, free-viewpoint video and content creation for 3D displays, augmented reality, analysis of lighting and materials, and digital video post-processing.

Current research topics based on the light field capturing system include, among others:

  • Calibration of multi-camera capturing systems (intrinsic/extrinsic calibration, hand-eye calibration, color correction, calibration of depth cameras)
  • Multi-modal sensor data fusion (fusion of depth maps, color and depth fusion)
  • Light field representations for dynamic scenes
  • Efficient representations for large-scale light fields (adaptive sparse grids, kd-trees, tensors)
  • Coding, compression, and storage of dynamic light field data
  • Novel view synthesis from dense/sparse light field samples of dynamic scenes
  • Real-time acquisition, processing, and rendering of light field data


Project Partners and Funding


European Training Network on Full Parallax Imaging
supported by the Marie Skłodowska-Curie actions under the EU Research and Innovation programme Horizon 2020
Project website:
Effiziente Rekonstruktion und Darstellung großflächiger dynamischer Lichtfelder
(Efficient reconstruction and representation of large-scale dynamic light fields)
supported by DFG – German Research Foundation (Deutsche Forschungsgemeinschaft)
Parallel On-Line Methods for High Quality Lightfield Acquisition and Reconstruction
supported by Intel Labs, Computational Imaging Lab

Deformation Tracking

The deformation tracking project focuses on the reconstruction of flexible objects from depth and color videos. The Analysis by Synthesis (AbS) method developed in this project meets the different requirements of various applications through its modular approach.
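The AbS idea can be sketched as a small optimization loop: synthesize a measurement from model parameters, compare it with the observed depth data, and keep the parameters that best explain the observation. A minimal 1D illustration with a hypothetical bending model (the project's actual deformation models are far richer):

```python
import math

def synthesize(bend, n=32):
    """Render a hypothetical 1D depth profile of a sheet bent by 'bend'."""
    return [1.0 + bend * math.sin(math.pi * i / (n - 1)) for i in range(n)]

def cost(observed, synthesized):
    """Sum of squared differences between observed and synthesized depth."""
    return sum((o - s) ** 2 for o, s in zip(observed, synthesized))

def fit_bend(observed, lo=-0.5, hi=0.5, steps=101):
    """Analysis by Synthesis: pick the bend parameter whose synthetic
    depth profile best explains the observed one (simple grid search)."""
    candidates = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
    return min(candidates, key=lambda b: cost(observed, synthesize(b)))

observed = synthesize(0.2)          # stands in for a real depth measurement
estimate = fit_bend(observed)
print(round(estimate, 2))           # recovers the bend parameter
```

Real AbS implementations replace the grid search with a robust optimizer and render full depth images from a deformable mesh, but the synthesize-compare-update structure is the same.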
The applications range from real-time tracking of partly occluded objects and Human Computer Interaction (HCI) to grasp evaluation for robots and material parameter estimation based on the visible deflection. The following list features a selection of AbS methods and applications of this project:



Reconstruction of Deformation from Depth and Color Video with Explicit Noise Models

Andreas Jordt, Reinhard Koch: Time-of-Flight and Depth Imaging: Sensors, Algorithms, and Applications (LNCS 8200), Pages 128--146, Springer Verlag 2013, ISBN 978-3-642-44963-5
Links: Abstract, Full Paper (Self-Archived Version), Bibtex
The final publication is available at


Flexpad: Highly Flexible Bending Interactions for Projected Handheld Displays

Jürgen Steimle, Andreas Jordt, Pattie Maes: ACM International Conference on Human Factors in Computing (CHI 2013). ACM Press.
(Best Paper Honorable Mention Award)
Links: Abstract, Full Paper, Bibtex



Direct Model-based Tracking of 3D Object Deformations in Depth and Color Video

Andreas Jordt, Reinhard Koch: International Journal Of Computer Vision (IJCV). Springer Verlag.
Links: Abstract, Full Paper (Submission Version), Bibtex
The final publication is available at


Estimation of Material Properties and Pose for Deformable Objects from Depth and Color Images

Andreas R. Fugl, Andreas Jordt, Henrik G. Petersen, Morten Willatzen, Reinhard Koch: In Pattern Recognition, Proceedings of the DAGM/OAGM 2012, Springer Verlag, LNCS 7476, Pages 165--174.
Links: Abstract, Bibtex


Fast Tracking of Deformable Objects in Depth and Colour Video

Andreas Jordt, Reinhard Koch, BMVC 2011, Proceedings of the British Machine Vision Conference.
Links: Abstract, Full Paper, Bibtex


An Outline for an intelligent System performing Peg-in-Hole Actions with flexible Objects

Andreas Jordt, Andreas R. Fugl, Leon Bodenhagen, Morten Willatzen, Reinhard Koch, Henrik G. Petersen, Knud A. Andersen, Martin M. Olsen, Norbert Krueger: ICIRA 2011, Proceedings of the International Conference on Intelligent Robotics and Applications.
Links: Abstract, Full Paper, Bibtex



Indoor Mapping

Automatic Mapping of Indoor Manhattan World Scenes using Kinect Depth Data

Today, no simple and cost-effective systems are available to map interiors. The purpose of this research is the development of a low-cost solution for the automatic mapping of empty or slightly furnished interiors.

3D reconstruction of a slightly furnished room



Robots for Handling of Flexible Objects

Project Description

Industrial production today relies heavily on automated tasks that are performed by robotic assembly lines. This approach works well if the objects are fully specified industrial assembly parts that do not change in size, appearance, etc., as in bin picking of predefined parts. Robots can perform such sorting and picking tasks nearly “blind”, with only limited visual capabilities. If, however, objects originate from the natural world and vary considerably in size and appearance, this approach will fail.


Hence there is a need for increased visual sensing capabilities, resulting in intelligent, “seeing” robots that can decide what to grasp using visual sensors and object learning.
Such naturally varying objects occur frequently when handling food or other flexible and deformable goods, like cloth or rubber, where traditional robot assembly is useless and production has to resort to manual labor. Southern Denmark and Schleswig-Holstein have a rather large food industry, which could become endangered if production costs rise. In this case it would be advantageous to upgrade production with intelligent robotic assembly, which at the same time will upgrade the qualifications and required skills of the workers: the tedious manual work can be performed by the robots, while human skills are upgraded to controlling the robotic tasks.



Project Goals

The IRFO project will enable companies in the region to upgrade their production facilities and to retain production in their region.

Possible areas of use for intelligent 3D vision systems include evaluating the status of livestock to automatically regulate the food supply for feeding, and sorting of livestock. Meat factories as well as slaughterhouses can use 3D vision systems when meat is chopped and has to be handled and packed.

In this project we intend to develop a robot-vision platform that will enable companies to automate the handling of natural goods, livestock, and deformable objects. The platform contains a 3D sensor that simultaneously captures 3D shape and color in depth-video sequences, which allows modeling of the 3D shape and the time-dependent object deformation. The object models will be utilized either to evaluate livestock for growth, meat status, etc., or to handle flexible objects with the robot. Modeling the internal forces of the object can predict the deformation over time, so the robot will be able to grasp the deforming object correctly and to handle it.
In addition to the direct benefits of the project for regional companies handling natural goods, there is an indirect benefit for high-technology companies that will take over the developed technology and market it to specialized market segments. For each market segment, adapted technology must be developed to optimize the approach and to ensure proper support.

The project will develop the base technology and will continue to support the regional companies, while the high-tech companies will multiply the technology across the different market segments.

For more information, see


IRFO Vision System


Our task in the IRFO project is to design the vision system and to develop the image processing algorithms. The current version of the vision system consists of two parts:
the laser stage, a line-structured-light setup that scans the objects on the conveyor, and a Time-of-Flight stage that tracks the objects while the robot interacts with them.

The Laser Stage:
This section is designed to create 3D models of the objects that pass by on the conveyor belt. The high-resolution models are used to initialize an 'undeformed' geometry of the object and to calculate the robot grasp. The data from the laser stage is the basis for further computations.
Example scan of a piece of artificial meat

The Time-of-Flight Stage:
The purpose of the ToF section is to track the deformation of the object that occurs when the robot starts to interact with it. The deformation can be used to estimate parameters describing the material, e.g. flexibility. These parameters are then used to simulate the behavior of the flexible object.

Here is an example of the Time-of-Flight stage applied to surface tracking of livestock (cattle). The IRFO software is used to track significant points on the cow's back, which allow body condition parameters to be calculated.







3D Bild


This project is funded by the BMWi within the InnoNet program.


The goal of this project is to manufacture large-scale premium-class 3D pictures which can be viewed without visual aids such as polarized eyeglasses. To this end, it is necessary to establish and refine the individual production steps needed for industrial production.


The 3D picture can be thought of as an array of very small slide projectors. In front of the light source is a developed film, exposed with the picture, similar to a reversal film. This picture has a size of 256x256 pixels, adding up to a total of 65,536 pixels. On top is a lens system with a diameter of 2 mm to reduce aberration.
These "small projectors" are packed very tightly, resulting in about 250,000 lens systems per square meter. Depending on the vantage point, every lens shows one pixel from the underlying film, and together these pixels yield the overall picture that can be seen. A slightly different vantage point will therefore result in a different picture. This dense light field provides stereo vision in a natural way.
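The per-lens pixel selection can be sketched with a simple ray construction. The 2 mm lens pitch and 256-pixel film tile come from the text; the lens-to-film gap and the viewer geometry below are illustrative assumptions:

```python
# Which film pixel does one lens of the 3D picture show to a viewer?
# A minimal pinhole sketch in 2D (horizontal cut through the picture).
TILE = 256          # film pixels per lens tile (from the text)
PITCH = 0.002       # lens diameter/pitch in meters (from the text)
GAP = 0.003         # assumed lens-to-film distance in meters

def pixel_seen(viewer_x, viewer_z, lens_x):
    """Trace a ray from the viewer through the lens center onto the film
    tile and return the horizontal pixel index it hits (clamped)."""
    dx = lens_x - viewer_x
    offset = GAP * dx / viewer_z          # film offset by similar triangles
    frac = offset / PITCH + 0.5           # 0..1 across the tile
    idx = int(frac * TILE)
    return max(0, min(TILE - 1, idx))

# A viewer straight in front of the lens sees the tile's central pixel;
# moving sideways shifts the visible pixel across the tile.
print(pixel_seen(0.0, 1.0, 0.0))    # -> 128 (centered view)
print(pixel_seen(-0.1, 1.0, 0.0))   # -> 166 (viewer moved 10 cm left)
```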

The main tasks of the CAU include:

*  simulation of the 3D picture for one person
*  reconstruction of the light field, sampled by calibrated cameras, for rigid scenes
*  reconstruction of the light field, sampled by uncalibrated video cameras, for rigid scenes
*  development of a renderer to deliver the pictures needed for the film exposition


Project partners


Research partners:

  • Fraunhofer IPM
  • Fraunhofer IPT


Industry partners:

  • RealEyes GmbH
  • AutoPan GmbH & Co
  • Euromediahouse GmbH
  • Meuser Optik GmbH
  • Kleinhempel Ink-Jet-Center GmbH


Associate partners:

  • Viaoptic GmbH
  • Soul Pix


Seafloor Reconstruction

3D-Modeling of Seafloor Structures from ROV-based Video Data

The goal of this research project is to investigate and develop the necessary adaptations of classic 3D reconstruction methods from the area of computer vision in order to apply them to underwater images. Applications can be found in the areas of geology and archaeology. The project is therefore a collaboration with the Geomar Helmholtz Centre for Ocean Research Kiel, the scientific divers group of Kiel University, and the group for maritime and limnic archaeology of Kiel University. In addition to the DFG financing, parts of the project have been financed by the Future Ocean Excellence Cluster, whose objective is to gain knowledge about a whole range of topics concerning the so far largely unknown deep ocean.

Some of the image data to be examined, coming from the area of geology, has been captured at great water depths, for example using the ROV Kiel 6000 (Remotely Operated Vehicle), which can reach water depths of 6000 m.
Equipped with several cameras, one of them an HDTV camera, it is used to examine black smokers, a type of hydrothermal vent found, for example, at the bottom of the Atlantic Ocean.
Because of the limited diving time, during which scientists need to complete a variety of examinations, the task of computer vision is to compute 3D reconstructions of the black smokers. In order to examine and measure the vents after the dive, a 3D model including the absolute scale needs to be determined. The 3D reconstructions are computed with a state-of-the-art structure-from-motion approach that has been adapted to the special conditions of the underwater environment.

Special characteristics of the underwater imaging environment in general, and of the black smokers specifically, that need to be considered include:

  • the optical path/refraction causes errors in geometry estimation,
  • scattering and absorption of light cause a green or blue hue and low contrast in the images and therefore impede feature matching, and
  • floating particles, moving animals, and smoke violate the rigid scene constraint.

Ocean floor and floating particles. Source: IFM-Geomar


Color Correction

While traveling through the water, light is attenuated and scattered depending on the distance traveled, causing the typical green or blue hue and the low contrast and visibility in underwater images. The Jaffe-McGlamery model can be used to describe these effects, and simplified versions of its equations are applied in many color correction algorithms in the literature. Usually, the distance the light has traveled through the water needs to be known. After running the SfM algorithm and computing the final 3D model, those distances are known. This allows a physics-based, simplified model equation to be applied for color correction of the texture image:

I_c = J_c * exp(-eta_c * d) + B_c * (1 - exp(-eta_c * d)),   c in {R, G, B},

where I_c is the observed intensity, J_c the unattenuated scene color, eta_c the attenuation coefficient, B_c the veiling light, and d the distance traveled through the water; solving for J_c yields the corrected color.
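A correction based on such a simplified attenuation-plus-veiling-light model can be sketched as inverting the forward model. All coefficient values below are illustrative assumptions, not values from the project:

```python
import math

# Hypothetical simplified underwater image formation (per color channel):
#   observed = true * exp(-eta*d) + veil * (1 - exp(-eta*d))
# with attenuation coefficient eta, veiling light 'veil', and traveled
# distance d (known here from the 3D model).

def attenuate(true_color, eta, veil, d):
    """Forward model: what the camera sees after distance d of water."""
    t = math.exp(-eta * d)
    return true_color * t + veil * (1.0 - t)

def correct(observed, eta, veil, d):
    """Inverse model: undo attenuation and veiling light."""
    t = math.exp(-eta * d)
    return (observed - veil * (1.0 - t)) / t

# Red light is absorbed fastest underwater, so eta would be largest for
# the red channel; the values here are made up.
observed_red = attenuate(0.8, eta=0.6, veil=0.1, d=2.0)
restored = correct(observed_red, eta=0.6, veil=0.1, d=2.0)
print(round(restored, 3))  # recovers the original 0.8
```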
Refraction at the underwater housing causes light rays to change their direction when entering the air within the housing. To be exact, light rays are refracted twice: once when entering the glass and again when entering the air. In the literature, the perspective pinhole camera model including distortion is used for computing the reconstruction. A calibration below water causes the focal length, principal point, and radial distortion parameters to absorb part of the error, hence the perspective calibration can approximate the effects. However, a systematic model error caused by refraction remains, because the single-viewpoint model is invalid.
In the image, this can be observed by tracing the rays in water while ignoring refraction (dashed lines): they do not intersect in the center of projection. It can easily be shown that this model error leads to an accumulating error in pose estimation when the perspective model is used for pose computation.
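The flat-port refraction can be illustrated with Snell's law in 2D (standard refractive indices assumed; the actual calibration models the full 3D geometry of the glass port):

```python
import math

# Snell's law at a flat interface: n1*sin(theta1) = n2*sin(theta2).
# A 2D sketch of a ray leaving the camera housing: air -> glass -> water.
N_AIR, N_GLASS, N_WATER = 1.0, 1.5, 1.33  # standard refractive indices

def refract(theta, n_from, n_to):
    """Angle (radians, measured from the interface normal) after refraction."""
    return math.asin(math.sin(theta) * n_from / n_to)

theta_air = math.radians(30.0)
theta_glass = refract(theta_air, N_AIR, N_GLASS)
theta_water = refract(theta_glass, N_GLASS, N_WATER)

# For parallel interfaces the glass thickness only offsets the ray;
# the final direction depends on n_air/n_water alone.
direct = refract(theta_air, N_AIR, N_WATER)
print(round(math.degrees(theta_water), 2))  # same angle either way
print(round(math.degrees(direct), 2))
```

Because each pixel's ray is bent individually at the port, the bent rays no longer meet in a single center of projection, which is exactly the systematic error described above.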

Therefore, refraction has been modeled explicitly in the whole reconstruction pipeline:

  • calibration of underwater housing glass port, assuming the camera's intrinsics are known
  • Structure-from-Motion algorithm that explicitly models refraction, and
  • dense depth computation using a refractive Plane Sweep method.

The corresponding publications for all three components can be found here.

The complete pipeline allowed, for the first time, the reconstruction of 3D models from multiple images captured by monocular or stereo cameras with explicitly modeled refraction at the underwater housing. The major conclusion was that the systematic model error caused by using the perspective camera model can be eliminated completely by the proposed refractive reconstruction.

The following figure shows results on real data captured in a tank in a lab. From left to right: exemplary input image, segmented input image, and results for two different camera-glass configurations. Note that the red camera trajectory and point cloud are the result of perspective reconstruction and the blue camera trajectory and point cloud were computed using the proposed refractive method. The result in the right image shows that perspective reconstruction failed, while the refractive method did not.

Input image and resulting camera path and 3D point cloud from an underwater volcano near the Cape Verdes.

In an underwater cave system in Yucatan, Mexico, archaeologists found a skull, which was also reconstructed with the pipeline.



Automatic 3D Modelling of Excavation Sites

In this collaboration with the archaeology department of Kiel University, a scene reconstruction has been performed based upon only two photographs. The images are automatically matched and robustly calibrated, providing a sparse set of verified correspondences. A depth map can then be produced that contains, for each pixel, the distance of the imaged object from the camera center. This can then be "backprojected" to yield a 3D surface model, either with the original colors or with a pseudo-color that better visualizes the geometric shape. Such models could be created automatically for each excavated layer to finally obtain a 4D (3D + time) model of the excavation site.
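The "backprojection" step can be sketched with the pinhole model. The intrinsic parameters below are illustrative assumptions, not the calibrated ones:

```python
# Backproject a pixel with known depth to a 3D point using the pinhole
# model. Intrinsics (focal length, principal point) are assumed values.
FX = FY = 800.0        # focal lengths in pixels
CX, CY = 320.0, 240.0  # principal point

def backproject(u, v, depth):
    """Return the 3D point (in the camera frame) for pixel (u, v) whose
    imaged surface lies 'depth' units along the optical axis."""
    x = (u - CX) / FX * depth
    y = (v - CY) / FY * depth
    return (x, y, depth)

# The principal point backprojects onto the optical axis:
print(backproject(320.0, 240.0, 2.0))   # -> (0.0, 0.0, 2.0)
# An off-center pixel maps to an offset 3D point:
print(backproject(400.0, 240.0, 2.0))   # -> (0.2, 0.0, 2.0)
```

Applying this to every pixel of the depth map yields the 3D surface, which can then be textured with the original colors or a pseudo-color.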


See the video: WMV low resolution (2.89 MB), WMV medium resolution (20 MB)
Kevin Koeser, July 2007



3D4YOU - Content Generation and Delivery for 3D Television

Project Description

The 3D4YOU project is a European research project to develop and establish standards for 3D television in a wide range of aspects. Combining the expertise of teams from across Europe, the project aims to build a 3D television production chain from content generation to end-consumer delivery.


Research Focus

The contribution of the Multimedia Information Processing Group of CAU to the EU 3D4YOU project lies in the calibration of the multi-camera capture setup as well as in the generation of dense depth maps at high-definition quality for 3D viewing devices.
To achieve reliable depth retrieval, a Time-of-Flight depth camera with about the same field of view as the CCD cameras is integrated to initially capture a low-resolution depth map of the scene. This low-resolution depth map can then be refined with the help of the high-resolution color data.
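The refinement idea, upsampling coarse ToF depth guided by high-resolution color, can be sketched in 1D with a joint bilateral filter. This is a generic technique shown for illustration; the data and parameters are made up, and the project's actual refinement method may differ:

```python
import math

# Refine a low-resolution depth scan with high-resolution color, in 1D
# for brevity. Depth is upsampled, then smoothed with a joint bilateral
# filter whose range weights come from the color image, so depth edges
# snap to color edges. Parameters are illustrative.

def upsample_nearest(depth_lo, factor):
    """Nearest-neighbor upsampling of a 1D depth scan."""
    return [depth_lo[i // factor] for i in range(len(depth_lo) * factor)]

def joint_bilateral_1d(depth, color, radius=2, sigma_s=2.0, sigma_c=0.1):
    """Smooth 'depth' with spatial weights and color-similarity weights."""
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) * \
                math.exp(-((color[i] - color[j]) ** 2) / (2 * sigma_c ** 2))
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out

depth_lo = [1.0, 1.0, 3.0, 3.0]          # coarse ToF measurements
color_hi = [0.2] * 5 + [0.9] * 3         # color edge at index 5
refined = joint_bilateral_1d(upsample_nearest(depth_lo, 2), color_hi)
# The refined depth edge aligns with the color edge instead of the
# coarse depth-grid boundary.
```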

The above image shows first results from a test sequence shot with the camera setup to the left. The low resolution depth map generated from the ToF-camera is warped into the left camera view and automatically refined to achieve a high resolution depth map.

One particular outcome of the project so far is a mixed reality system exploiting ToF camera technology, using depth keying algorithms and depth layer segmentation.
The steps of the processing chain for the mixed reality system are:

  1. Environment model generation with a Time-of-Flight camera
  2. Camera pose determination
  3. Shadow computation
  4. Depth keying and mixing
  5. Tracking of dynamic foreground objects and content alignment
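Step 4, depth keying, can be sketched as a per-pixel depth comparison (a toy illustration, not the project's implementation):

```python
# Depth keying: composite a virtual object into a real camera image by
# comparing per-pixel depth. Wherever the virtual surface is closer than
# the measured ToF depth, the virtual pixel wins. Toy 1-D "images".

def depth_key(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel mix: keep the nearer of the real and virtual surface."""
    out = []
    for rr, rd, vr, vd in zip(real_rgb, real_depth, virt_rgb, virt_depth):
        out.append(vr if vd < rd else rr)
    return out

real_rgb   = ["wall", "wall", "person", "person"]
real_depth = [4.0,     4.0,    1.5,      1.5]     # meters from ToF camera
virt_rgb   = ["cube"] * 4
virt_depth = [2.0] * 4                            # virtual cube at 2 m

# The cube appears in front of the wall but is occluded by the person.
print(depth_key(real_rgb, real_depth, virt_rgb, virt_depth))
# -> ['cube', 'cube', 'person', 'person']
```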

This way, mixed reality content with increased realism can be generated, while at the same time all data necessary for 3D-TV applications is delivered.


Multi Camera System

Although still under investigation, the currently proposed camera system for capturing input data for the generation of 3D television content consists of four CCD HDTV cameras in a multiview rig together with a Time-of-Flight depth camera, which can be used to initialize the computation of dense depth maps of the scene at the resolution of the HDTV cameras. This seemingly complicated setup with additional satellite cameras is suitable for multi-view stereo approaches as well as for occlusion layer generation, yet stays as compact as possible.
The depth information will then be available for all camera views in the multi-view rig, as well as for interpolated views between them, to support different 3D display types based on stereoscopic or autostereoscopic technology.


Further Information

For a more detailed overview on the 3D4YOU project and to visit our project partners, please visit the 3D4YOU Homepage.



MoSeS (Modular Sensor Software)



From navigation in shipping via safety and surveillance technology to measurement technology, the control of embedded computer systems by sensors that capture and process information from their environment is increasing significantly. So far, however, a multitude of different software algorithms and components exists for processing this information. In addition, the fusion of different sensor information (for example, combining visual sensors with acceleration sensors or GPS position sensors) has so far only been solved for individual cases. The goal of this project is therefore to develop a unified software concept for sensor-supported information processing, enabling the fusion of heterogeneous sensors.

This work is funded by the "Zukunftsprogramm Schleswig-Holstein (2007-2013)", the European Commission (ERDF), and the state of Schleswig-Holstein as part of the KoSSE initiative, project 122-09-048.
More information on the "MoSeS" project within the Kompetenzverbund Software Systems Engineering is available here.


In the course of the project, the software framework "MoSeS" (Modular Sensor Software) was developed and implemented in C/C++. The framework provides modules and an execution control with which a multitude of problems from the areas of visual navigation, sensor data processing, and 3D scene reconstruction can be solved prototypically. Concrete applications are created and modules configured via an XML description language as well as a graphical user interface. The program flow is displayed in a separate GUI.

The concept finds its first concrete application in the inspection of sewer pipe systems and manholes, where the 3D image analysis of a fisheye camera is combined with position measurements and inertial sensors. This application is investigated in cooperation with the company IBAK Helmut Hunger GmbH & Co. KG.

A second concrete application is stereo SLAM, i.e., the automatic creation of a map of the environment and simultaneous localization within it, performed by an autonomously acting robot using a stereo camera system and inertial sensors.


Editor Screenshot 1
Screenshot of the editor for assembling applications from individual modules.

Editor Screenshot 2
View of the "Data Pool" for internal data management and processing between modules.

Application Screenshot 1
Screenshots of the GUI for executing and monitoring applications built from modules.

Application Screenshot 2
Visualization of reconstruction results from a "Structure from Motion" application.



Multi-Camera Calibration

At the Multimedia Information Processing Group, a powerful calibration software has been developed which is capable of calibrating multiple rigidly coupled cameras (e.g., a stereo rig or a rig with many rigidly coupled cameras) in one single program. The software calibrates the intrinsic and extrinsic parameters of all cameras together. Additionally, 3D cameras can be calibrated together with the standard camera(s). The 3D/ToF/PMD cameras suffer from systematic depth measurement errors, which are modeled by a higher-order function whose parameters are also estimated during the calibration process.

Kinect Support

Recently, the Kinect cameras developed for Microsoft by PrimeSense have gained a lot of importance in the computer vision community. As the Kinect cameras deliver depth images along with color images, they can be calibrated with the MIP MultiCameraCalibration, which is therefore the first software capable of calibrating Kinect cameras together with other cameras.

To calibrate Kinect cameras, depth and intensity images are needed. While taking the images for calibration, a depth and an intensity image have to be captured for every checkerboard pose. The Kinect cameras cannot deliver both at the same time, but it is possible to switch modes and take the two images one after the other. Do not change the position of the checkerboard or the camera between taking the depth and the intensity image. The Kinect cameras deliver z-depth instead of ray lengths in the depth images, while the MIP MultiCameraCalibration internally uses ray lengths. So for a proper depth calibration, the images have to be transformed to what we call "polar"-coordinate depth images; the function ProjectionParametersPerspective::TransformCartesianToPolarCoordinates() from the BIAS software is used for this purpose. This is not absolutely necessary, though, as the depth delivered by the Kinect cameras is quite reliable and a linear depth error model is sufficient.
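The z-depth-to-ray-length conversion can be sketched with the pinhole model; in BIAS, ProjectionParametersPerspective::TransformCartesianToPolarCoordinates() performs the equivalent transformation with the calibrated parameters. The intrinsics below are illustrative assumptions:

```python
import math

# Convert a z-depth value (distance along the optical axis, as delivered
# by the Kinect) into a ray length (distance along the viewing ray, as
# used internally by the calibration).
FX = FY = 580.0        # focal lengths in pixels (illustrative)
CX, CY = 320.0, 240.0  # principal point (illustrative)

def z_depth_to_ray_length(u, v, z):
    """Scale z by the length of the normalized viewing ray of pixel (u, v)."""
    x = (u - CX) / FX
    y = (v - CY) / FY
    return z * math.sqrt(1.0 + x * x + y * y)

# On the optical axis both measures agree ...
print(z_depth_to_ray_length(320.0, 240.0, 2.0))   # -> 2.0
# ... towards the image corners the ray is longer than the z-depth.
print(round(z_depth_to_ray_length(0.0, 0.0, 2.0), 3))
```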

The BIAS software also supports the Kinect cameras. It can be used to capture images directly with the biasShowCamWx application. Note that both the official and the open-source drivers for the Kinect cameras are supported in BIAS. If the BIAS software is used to capture images, the BIAS image format (.mip) can be used to store the depth images; otherwise, the .pmd image format described below should be used.


This Software is an extension to the BIAS Software Library. This Software and
BIAS are distributed in the hope that they will be useful, but please note:


Permission is granted to any individual or institution to use, copy and distribute this software, provided that this complete copyright and permission notice is maintained intact, in all copies and supporting documentation.

Image Formats

  • For the intensity images, most common file formats are supported, to name a few: mip, ppm, png, pgm, jpg.
  • For the ToF images, only two formats are supported: our own mip file format, where the distance values are stored in mm, and an xml file format which includes distance values (in m!), greyscale values, and amplitude values. (See "pmd0000.pmd" for an example; this is a zipped file containing a ".pmd" file.)

Note that if the xml file format is used, the files must have the suffix ".pmd" to be loaded properly. You may have to specify the image list with the pmd images twice in the properties GUI at the beginning (as pmd intensity and as pmd depth images).


Go to the Download Page.

General Notes

  • What we calibrate: intrinsic and extrinsic camera parameters (focal length, principal point, skew/shear, radial/tangential lens distortion, rotation, and translation).
  • The calibration approach is suitable for perspective and spherical (fisheye) cameras. The perspective case works very well and reliably, while the spherical case still has some known weaknesses: we use the method proposed by D. Scaramuzza to approximate the parameters and try to refine them afterwards, and infrequent crashes occur in this refinement.
  • The calibration algorithm is based on a planar checkerboard pattern, so to calibrate your camera(s) you have to take pictures of a planar checkerboard pattern. The more pictures you take, the more precise your result will be. It is also important to cover all areas of your images with the calibration pattern; especially the border regions of the images are mandatory if you want a precise result for the radial and tangential lens distortion parameters. We typically use between 20 and 80 pictures per camera, taken from different angles, directions, and distances.
  • If you want to calibrate more than one camera, you have to take the pictures of the checkerboard with all cameras at the same moment. Remember that the cameras should be rigidly coupled! Again, many images help to produce better results.
  • If you want to calibrate a PMD/ToF/3D camera and are interested in the depth error correction, you have to take images of the calibration pattern at different distances. The distances should cover the whole operating range of the camera; typical values are between 2.0 m and 7.5 m.
  • The checkerboard has to be visible and detectable in your images, so ensure that it is big enough to be identified even at some distance from the cameras.
  • The algorithm uses lists of images. The images have to be in the same order in all image lists, images at the same position in the lists have to be taken simultaneously, and the lists must have the same length. You can invalidate images in which, for example, the checkerboard is not visible. However, you cannot invalidate an image belonging to the first camera in your rig, because the positions and rotations of the other cameras are calculated relative to that first camera. (If you only calibrate one camera, it is possible to invalidate pictures for this camera.)
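How the calibrated intrinsics enter the projection can be sketched with a one-coefficient radial distortion model (a simplified variant of the Bouguet-style radial model named below; all parameter values are made up, not calibration results):

```python
# Project a 3D point (camera coordinates) to pixel coordinates with
# a one-coefficient radial distortion model.
FX, FY = 800.0, 800.0   # focal lengths
CX, CY = 320.0, 240.0   # principal point
K1 = -0.2               # radial distortion coefficient (barrel for K1 < 0)

def project(X, Y, Z):
    """Pinhole projection followed by radial distortion."""
    x, y = X / Z, Y / Z                 # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + K1 * r2                   # radial distortion factor
    return (FX * x * d + CX, FY * y * d + CY)

# A point on the optical axis lands on the principal point:
print(project(0.0, 0.0, 2.0))          # -> (320.0, 240.0)
# Off-axis points are pulled towards the center (barrel distortion):
print(project(0.5, 0.0, 2.0))
```

Calibration estimates FX, FY, CX, CY, and the distortion coefficients by minimizing the reprojection error of the detected checkerboard corners.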

Depth Error Model

In the current version of the software, the depth error is modeled as a polynomial:

λ* = λ + d0 + d1·λ + d2·λ² + d3·λ³ + d4·x + d5·y

where λ* is the corrected depth, (x, y) are the image coordinates, λ is the measured depth, and d0, ..., d5 are the parameters to estimate.

Besides the polynomial depth error model, a spline model and a linear error model are also implemented. The linear model consists only of an offset and an inclination parameter. The spline error model follows the spline model introduced by Marvin Lindner and Andreas Kolb.
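For illustration, the linear model (offset plus inclination) and a polynomial correction over the measured depth and image coordinates can be sketched as follows. The coefficient values are made-up examples; the real ones are estimated during calibration:

```python
# Depth error correction sketch. The linear model corrects a measured
# depth lam with an offset and an inclination; the polynomial variant
# adds higher-order terms in lam and a linear dependence on the image
# coordinates (x, y). All coefficients are illustrative.

def correct_linear(lam, offset=0.05, inclination=0.98):
    """Linear model: corrected depth = offset + inclination * measured."""
    return offset + inclination * lam

def correct_polynomial(lam, x, y, d=(0.05, -0.02, 0.001, 0.0, 1e-5, 1e-5)):
    """Polynomial model with coefficients d0..d5 (illustrative form)."""
    d0, d1, d2, d3, d4, d5 = d
    return lam + d0 + d1 * lam + d2 * lam**2 + d3 * lam**3 + d4 * x + d5 * y

print(round(correct_linear(3.0), 3))              # -> 2.99
print(round(correct_polynomial(3.0, 100, 100), 3))
```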

Marvin Lindner, Andreas Kolb:
Lateral and Depth Calibration of PMD-Distance Sensors
Advances in Visual Computing, Springer, 2, 2006, pp. 524-533.

You can choose which model to use in the project definition window.


If this software is used to produce scientific publications please add a reference to it in the reference section:

Ingo Schiller: MIP - Multi-Camera Calibration.

Further details can be found in the papers:

Marvin Lindner, Ingo Schiller, Andreas Kolb, Reinhard Koch:
Time-of-Flight Sensor Calibration for Accurate Range Sensing
Journal of Computer Vision and Image Understanding (CVIU) 114 (12) (2010), pp. 1318–1328. Bibtex
Ingo Schiller, Christian Beder, Reinhard Koch:
Calibration of a PMD-Camera using a Planar Calibration Pattern together with a Multi-Camera Setup
Proceedings of the XXI ISPRS Congress, Beijing, July 3-11, 2008. BibTeX


Starting the program will present a properties GUI where you can add the image lists and set some parameters.

  • Add image lists to the calibration project. Image lists are just textfiles containing the paths of the image files. You can use the button on the bottom right corner to make image lists.
  • When you add an image list you are asked if this is a spherical camera. This means if the FoV is ~180 degrees. In this case the software tries to estimate spline parameters after the model of Scaramuzza. If you use a standard perspective camera just press "No".
  • After that you are asked if you want to fix the intrinsic parameters. This mean you can load the parameters such as principal point, focal length and lens distortion from file. Therefor the parameters have to be given in our file format. (Tip: Calibrate one camera without fixed parameters then you have a point to start)
  • The depth distortion parameters are handled in a special way. If you load a file which includes depth parameters, they are taken as fixed. If you load a file without depth parameters, or with all of them set to zero, they are estimated. (If you don't load anything, they are estimated as well.)
  • Select the number of the inner corners of your checkerboard. For example this checkerboard:

has xCorners=7 and yCorners=4

  • Select the size of one square of your checkerboard. That is, for example, the width of a single black square in the image above.
  • Select whether to use software rendering or GPU rendering. GPU rendering only works on NVIDIA GeForce 7950 or newer cards. You can try it, but I recommend using the software rendering, although it is slower.
  • Choose which radial distortion model you wish to estimate: only the radial distortion parameters (Bouguet), radial and tangential distortion parameters, or the distortion parameters as defined by Brown (still experimental).
  • Select which depth error model you wish to use: polynomial, linear, or spline (still experimental).
  • Finally select whether your checkerboard is black or white in the top left corner.

Save your project before pressing OK, or you will have to configure it again!
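Since image lists are just text files with one image path per line, they can also be generated outside the GUI. A minimal sketch; the one-path-per-line format follows the description above, while sorting the paths is an assumption made here for a stable frame order:

```python
import glob
import os

def write_image_list(image_dir, pattern, list_path):
    """Write a text file with one image path per line, as expected by
    the calibration project. Returns the number of paths written."""
    paths = sorted(glob.glob(os.path.join(image_dir, pattern)))
    with open(list_path, "w") as f:
        f.write("\n".join(paths) + "\n")
    return len(paths)
```

For example, `write_image_list("cam0", "*.png", "cam0_list.txt")` collects all PNG images of one camera into a list file that can then be added to the project.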


Corner Detection / Selection

  • After pressing OK, the configuration GUI disappears and the calibration program starts with the main window and an image list.
  • You now have to select all inner corners of the checkerboard in the images.


  • The software tries to automatically detect the corners of the checkerboard. If the detection is successful, the checkerboard will be marked in the image as displayed above. If it could not be detected, you have to click the outer four corners with the mouse. A left click sets a corner; a right click lets you start over. Always select the corners in the same way: corner 1 has to be the same physical corner in all images, and the direction of selection has to be the same!
  • If you only click the outer four corners, the other inner corners are interpolated. For cameras with large lens distortion it is advisable to select all corners. To do so, check the "Select All Corners" checkbox. You then have to select the corners row-wise, always starting at the left, from top to bottom.
  • Later versions of the software feature an auto-detection button in the menu bar. Pressing it, the software detects corners in all images and marks the valid and invalid ones. It stops if corners could not be detected in an image.
  • Later versions also have an auto-invalidate feature which automatically invalidates images in which the corners could not be detected (menu item Corners->Auto Invalidate Images).
  • It is also possible to scale the images for convenience and corner detection. Use the "Rescale" control in the menu bar. Sometimes the automatic corner detection works better if the images are enlarged or shrunk.
  • To navigate through the images, use "Previous" and "Next" or click on the image name in the image list.
  • If you want to skip an image, press "Invalidate"
  • After selecting all corners in all images, save your corners via the menu item Corners->SaveCorners.
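The interpolation of the inner corners from the four clicked outer corners can be sketched as a bilinear blend over the corner grid. This is a simplified illustration; the software's actual scheme, and how it behaves under lens distortion, may differ, which is exactly why selecting all corners by hand is advisable for strongly distorting lenses:

```python
def interpolate_corners(tl, tr, br, bl, x_corners, y_corners):
    """Bilinearly interpolate all checkerboard corners from the four
    outer corner clicks (each an (x, y) pixel position). Returns the
    corners row-wise, from top-left to bottom-right."""
    grid = []
    for j in range(y_corners):
        v = j / (y_corners - 1)
        for i in range(x_corners):
            u = i / (x_corners - 1)
            x = (1-u)*(1-v)*tl[0] + u*(1-v)*tr[0] + u*v*br[0] + (1-u)*v*bl[0]
            y = (1-u)*(1-v)*tl[1] + u*(1-v)*tr[1] + u*v*br[1] + (1-u)*v*bl[1]
            grid.append((x, y))
    return grid
```

For the 7x4 example board above, this yields 28 corner positions in the same row-wise order used when selecting all corners manually.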

Approximation / Calibration

  • After selecting all corners in all images, press "Approximate". This approximates the camera position relative to the checkerboard for each image, as well as the intrinsic camera parameters of all cameras.
  • Deprecated in versions > 0.8.5: After that, press "Mean". This computes the mean of the relative orientations and positions of all cameras in the camera rig. If only one camera is used, nothing happens.


  • After that press "Calibrate". This starts the actual calibration algorithm, which minimizes the reprojection error over all valid points in all images.


  • You can follow this process by looking at the reprojections, via the menu: Reprojection->ShowReprojection.
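What "minimizing the reprojection error" evaluates can be sketched with a plain pinhole projection. Lens distortion is omitted for brevity; the software's full model also includes the distortion parameters chosen in the project setup:

```python
import math

def project(X, R, t, f, pp):
    """Project a 3D point X using rotation R (3x3, as row lists),
    translation t, focal length f, and principal point pp."""
    xc = [sum(R[i][k] * X[k] for k in range(3)) + t[i] for i in range(3)]
    return (f * xc[0] / xc[2] + pp[0], f * xc[1] / xc[2] + pp[1])

def reprojection_error(observed, X, R, t, f, pp):
    """Pixel distance between a detected corner and its reprojection;
    calibration minimizes the sum of these over all corners and images."""
    u, v = project(X, R, t, f, pp)
    return math.hypot(observed[0] - u, observed[1] - v)
```

The reprojection display mentioned above visualizes exactly this residual: detected corners versus the corners reprojected with the current parameter estimates.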

Obtaining the Results

  • During and at the end of the calibration process you can save the calibration results by selecting Calibration->SaveRig from the menu.
  • The result will be a file in which all camera parameters are saved. This file is in the format of BIAS::ProjectionParameters; see our BIAS software and documentation site for details.
  • The position and rotation of the first camera are set to zero. The positions and rotations of all other cameras are given relative to this first camera.
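Expressing all poses relative to the first camera is a simple change of reference frame. A sketch with pure-Python 3x3 matrices, assuming the convention x_cam = R·x_world + t (the convention actually used by BIAS may differ):

```python
def to_relative_poses(Rs, ts):
    """Re-express camera poses so that the first camera has identity
    rotation and zero translation; all others become relative to it."""
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    def transpose(A):
        return [[A[j][i] for j in range(3)] for i in range(3)]
    R0_inv = transpose(Rs[0])  # the inverse of a rotation is its transpose
    rel = []
    for R, t in zip(Rs, ts):
        R_rel = matmul(R, R0_inv)
        # t_rel = t - R_rel * t0, so the first camera lands at the origin
        t_rel = [t[i] - sum(R_rel[i][k] * ts[0][k] for k in range(3))
                 for i in range(3)]
        rel.append((R_rel, t_rel))
    return rel
```

After this transformation the first entry is always the identity pose, matching the saved rig file described above.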


Verifying the Results

  • When the calibration is done, you can save the images with the projections by selecting Calibration->Save Images with Poses from the menu. This saves all images with the calibration parameters in the image headers in files "".
  • From version 0.8.8 on, the installation package comes with the BIASEpipolarGeometryViewer software, which can be used to check the quality of the calibration. The software takes two images and two projections (camera parameters) as input; if the camera parameters are included in the images (as they are in this case), giving two images is sufficient. It displays the images side by side. When a point in the left image is clicked with the mouse, it draws the epipolar line in the right image. If the calibration was successful, the epipolar line in the right image should pass through the point corresponding to the clicked point in the left image.
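The geometry the viewer visualizes can be sketched as follows: a clicked left-image point p maps to the line l = F·p in the right image, and a good calibration puts the corresponding right-image point on (or very near) that line. A minimal sketch with F as a plain 3x3 fundamental matrix; the example matrix below is purely illustrative, while in practice F follows from the calibrated camera pair:

```python
def epipolar_line(F, p):
    """Map a left-image point (x, y) to its epipolar line (a, b, c)
    in the right image, satisfying a*x' + b*y' + c = 0."""
    ph = (p[0], p[1], 1.0)  # homogeneous coordinates
    return tuple(sum(F[i][k] * ph[k] for k in range(3)) for i in range(3))

def distance_to_line(line, q):
    """Pixel distance of a right-image point from the epipolar line;
    small distances indicate a successful calibration."""
    a, b, c = line
    return abs(a * q[0] + b * q[1] + c) / (a * a + b * b) ** 0.5

# Illustrative F for a pure horizontal translation: epipolar lines
# are horizontal, so corresponding points share the same row.
F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
```

Clicking a point and checking this distance for its correspondence is exactly the visual test the viewer performs.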

Good Luck!


The software is maintained by Ingo Schiller; please contact him if you have any questions or encounter errors. Please read the instructions on the previous pages before using the software, and keep in mind that although the software delivers very precise results, it is not a commercial product and errors may occur.

If you encounter an error executing the application, install the Microsoft Redistributable package, either from the start menu or by downloading it here. This provides the necessary DLLs.

Version 1.0.0:

Windows 7: (use setup.exe to install!)

Changes to 0.9.2:

  • A lot of small bugfixes.
  • New versions of third-party libraries such as GSL and LAPACK are now used; the old versions probably caused crashes in previous releases.


  • If image formats such as JPG, PNG, etc. are used, which are loaded using ImageMagick, it is essential that the correct libraries are used. In particular, only (!) the correct IM_MOD_RL_xxx_.dll libraries are allowed in the path. If any other versions of these libraries are in the path, the application will crash while loading images.

Version 0.9.2

Windows 7: (use setup.exe to install!)

Changes to 0.9.1:

  • Now writes log messages and intermediate calibration files into user data folder.
  • New checkerboard detector which is more reliable on images with strong distortions (select Crossfilter in Corners menu).
  • New silent mode without annoying message boxes for unsupervised calibration.
  • Small bugfixes.


  • see version 1.0.0


Version 0.8.12:

Windows XP:

Ubuntu 8.10: MIPMultiCameraCalibration0.8.12.deb (experimental!)
Installs itself in /usr/local/bin/, executables are 'MIPMultiCameraCalibration','biasviewwx' and 'biasshowepiwx'.

OpenSuse 11.1, 32 Bit: MIPMultiCameraCalibration-0.8.12-1.i586.rpm (experimental!)
Installs itself in /usr/local/bin/, executables are 'MIPMultiCameraCalibration','biasviewwx' and 'biasshowepiwx'.

Changes to 0.8.10:

  • Now shows in the image list frame which images have already been handled.
  • Fixed bug in display of depth deviation parameters.
  • Corners which are detected in inverse order are now rotated.
  • Autoskip feature for faster calibration (select Auto Invalidate Images in Corners menu).
  • Refine corners with corner detector (experimental!).

All downloads are ZIP files containing an installation package which should work on all Windows (XP, Vista, Windows 7) computers.

Older Research Projects

3D4YOU - Content Generation and Delivery for 3D Television
The 3D4YOU project is a European research project to develop and establish standards for 3D Television in a wide range of aspects. Combining the expertise of different teams from across Europe, the goal of the project is to build a 3D Television production chain from content generation to end-consumer delivery.


The aim of this project is the development of an Augmented Reality binocular system. The binocular overlays the visible image with information, e.g. from sea charts, to improve nautical navigation.


Research Project: 3DPoseMap

Image Image
3D-Pose Estimation and Mapping with PMD (Photonic Mixing Device) Camera


Research Project: Artesas


Augmented Reality Technologies for Industrial Service Applications

The goal of this BMBF-funded project was to build a mobile Augmented Reality solution for complex service applications.


Research Project: Matris

Markerless Real-Time Tracking for Augmented Reality Image Synthesis


There are many applications in which it is necessary to overlay a computer-generated object onto a real scene in real-time, requiring accurate measurement of the position of the camera or headset. Existing methods require bulky hardware, severely limiting their usability. The objective of this project was to develop and implement a system for determining the position, orientation, and focal length of a camera in real-time, by analysis of the camera images and exploitation of unobtrusive inertial motion sensors. This enables the system as a whole to determine its location and orientation in a very natural way.


Research Project: IBAK Sewer Pipe Inspection


The goal of this research project was automatic reconstruction of the surface geometry of sewer pipes from camera images using image processing techniques in order to detect damages. The images were captured by a fisheye-lens camera mounted on a mobile robot while driving through the pipe.

This project was partly funded by Innovationsstiftung Schleswig-Holstein and took place in cooperation with IBAK Helmut Hunger GmbH & Co. KG.


Research Project: IBAK Sewer Shaft Profile Measuring


This follow-up project with IBAK aimed at the robust measurement of sewer shaft profiles from images acquired by a fisheye-lens camera lowered into the shaft. From measurements of single profiles at different heights, a full textured 3D reconstruction of the sewer shaft could be created, which is helpful for off-site inspection of the shaft.


Research Project: Helicopter Camera


The goal of this research project was to develop a model helicopter with a mounted camera that automatically centers a defined object in the center of the camera view.


Research Project: INVENT


INVENT (German abbreviation for "Intelligent Traffic and User-Friendly Technology") is a BMBF-funded project with goals in different areas such as traffic safety and traffic management. We worked in close cooperation with DaimlerChrysler AG on the subproject FUE (German abbreviation for "Detection and Interpretation of the Driving Environment"). Our part in this subproject was the development of an intersection assistant for potentially dangerous inner-city crossings.


Research Project: ORIGAMI


ORIGAMI is an EU-funded IST project with the goal of developing advanced tools and new production techniques for high-quality mixing of real and virtual content for film and TV productions. In particular, the project focused on pre-production tools for set extension through image-based 3D modeling of environments and objects.

We focused on solutions for the creation of 3D background models, in particular from uncalibrated cameras, on depth estimation and geometric 3D modeling, and on novel image-based photorealistic rendering following a plenoptic rendering approach.

Research Project: 3D Reconstruction from Images

This research project aimed at automatic 3D reconstruction from images of a moving camera via a Structure from Motion approach. The project was presented at CeBIT 2003.