Multimedia Information Processing

Multi-Camera Calibration

At the Multimedia Information Processing Group a powerful calibration software has been developed which can calibrate multiple rigidly coupled cameras (e.g. a stereo rig or a rig with many rigidly coupled cameras) in a single program. The software calibrates the intrinsic and extrinsic parameters of all cameras together. Additionally, 3D cameras can be calibrated together with the standard camera(s). The 3D/ToF/PMD cameras suffer from systematic depth measurement errors, which are modeled by a higher-order function and are also estimated during the calibration process.

Kinect Support

Recently the Kinect cameras, developed for Microsoft by PrimeSense, have gained a lot of importance in the computer vision community. As the Kinect cameras deliver depth images along with color images, they can be calibrated with the MIP MultiCameraCalibration, which is therefore the first software capable of calibrating Kinect cameras together with other cameras.

To calibrate Kinect cameras, depth and intensity images are needed: while taking the calibration images, a depth and an intensity image have to be captured for every checkerboard pose. The Kinect cameras cannot deliver both at the same time, but it is possible to switch the mode and take the two images one after the other. Please do not change the position of the checkerboard or the camera between taking the depth and the intensity image. The Kinect cameras deliver z-depth instead of ray lengths in the depth images, while the MIP MultiCameraCalibration internally uses ray lengths, so for a proper depth calibration the images have to be transformed to what we call "polar"-coordinate depth images. The function ProjectionParametersPerspective::TransformCartesianToPolarCoordinates() from the BIAS software is used for this purpose. This step is not absolutely necessary, as the depth delivered by the Kinect cameras is quite reliable and a linear depth error model is sufficient.
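
The following C++ sketch illustrates the z-depth to ray-length conversion described above, i.e. the math behind ProjectionParametersPerspective::TransformCartesianToPolarCoordinates(). The image representation and the intrinsics structure are simplified placeholders, not the BIAS types.

  #include <cmath>
  #include <vector>

  // Simplified pinhole intrinsics (placeholder for the BIAS camera classes).
  struct Intrinsics {
    double fx, fy;  // focal lengths in pixels
    double cx, cy;  // principal point in pixels
  };

  // zDepth holds width*height z-depth values (row-major, e.g. in millimetres).
  // Returns the corresponding ray lengths ("polar"-coordinate depth).
  std::vector<double> ZDepthToRayLength(const std::vector<double>& zDepth,
                                        int width, int height,
                                        const Intrinsics& K) {
    std::vector<double> rayLength(zDepth.size());
    for (int y = 0; y < height; ++y) {
      for (int x = 0; x < width; ++x) {
        // Direction of the viewing ray through pixel (x, y) in camera coordinates.
        const double rx = (x - K.cx) / K.fx;
        const double ry = (y - K.cy) / K.fy;
        // A z-depth of z corresponds to a distance z * |(rx, ry, 1)| along the ray.
        const double scale = std::sqrt(rx * rx + ry * ry + 1.0);
        rayLength[y * width + x] = zDepth[y * width + x] * scale;
      }
    }
    return rayLength;
  }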

The BIAS Software also supports the Kinect cameras and can be used to capture images directly with the biasShowCamWx application. Note that both the official and the open-source drivers for the Kinect cameras are supported in BIAS. If BIAS is used to capture images, the BIAS image format (.mip) can be used to store the depth images; otherwise the .pmd image format described below should be used.


This Software is an extension to the BIAS Software Library. This Software and
BIAS are distributed in the hope that they will be useful, but please note:


Permission is granted to any individual or institution to use, copy and distribute this software, provided that this complete copyright and permission notice is maintained intact, in all copies and supporting documentation.

Image Formats

  • For the intensity images most common file formats are supported. To name a few: mip, ppm, png, pgm, jpg.
  • For the ToF images only two formats are supported: our own mip file format, where the distance values are stored in mm, and an xml file format which includes distance values (in m!), greyscale values and amplitude values. (See "pmd0000.pmd" for an example; this is a zipped file containing a ".pmd" file.)

Note that if the xml file format is used, the files must have the suffix ".pmd" to be loaded properly. You may have to specify the image list with the pmd images twice in the properties GUI at the beginning (once as pmd intensity and once as pmd depth images).


Go to the Download Page.

General Notes

  • What we calibrate: intrinsic and extrinsic camera parameters (focal length, principal point, skew/shear, radial/tangential lens distortion, rotation and translation).
  • The calibration approach is suitable for perspective and spherical (fisheye) cameras. The perspective case works very well and reliably, while the spherical case still has some known weaknesses. We use the method proposed by D. Scaramuzza to approximate the parameters and try to refine them afterwards; in this refinement infrequent crashes may occur.
  • The calibration algorithm is based on a planar checkerboard pattern, so to calibrate your camera(s) you have to take pictures of a planar checkerboard. The more pictures you take, the more precise your result will be. It is also important to cover all areas of your images with the calibration pattern; especially the border regions of the images are essential for a precise result for the radial and tangential lens distortion parameters. We typically use between 20 and 80 pictures per camera, taken from different angles/directions and distances.
  • If you want to calibrate more than one camera, you have to take the pictures of the checkerboard with all cameras at the same moment. Remember that the cameras should be rigidly coupled! Again, many images help to produce better results.
  • If you want to calibrate a PMD/ToF/3D camera and are interested in the depth error correction, you have to take images of the calibration pattern at different distances. The distances should cover the whole operating range of the camera; typical values are between 2.0 m and 7.5 m.
  • The checkerboard has to be visible and detectable in your images, so ensure that it is big enough to be identified even at some distance from the cameras.
  • The algorithm uses lists of images. The images have to be in the same order in the image lists: images at the same position in the lists have to be taken simultaneously, and the lists must have the same length. You can invalidate images in which, for example, the checkerboard is not visible. However, you cannot invalidate an image belonging to the first camera in your rig, because the positions and rotations of the other cameras are calculated relative to that first camera. (If you only want to calibrate one camera, it is possible to invalidate pictures of this camera.) See the example lists below.
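
As an illustration, the image lists for a two-camera rig could look as follows. All file names and paths here are made up; the lists can also be created with the button in the properties GUI described below.

  cam0_images.txt (first camera of the rig):

    /data/calib/cam0/pose_000.png
    /data/calib/cam0/pose_001.png
    /data/calib/cam0/pose_002.png

  cam1_images.txt (second camera; same checkerboard poses, same order and length):

    /data/calib/cam1/pose_000.png
    /data/calib/cam1/pose_001.png
    /data/calib/cam1/pose_002.png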

Depth Error Model

In the current version of the software, the depth error is modeled as a polynomial: the corrected depth λ* is computed as a polynomial function of the measured depth λ and the image coordinates (x, y), with the parameters d0, ..., d5 estimated during the calibration.
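
For illustration, assuming the correction takes the form λ* = λ + d0 + d1·λ + d2·λ² + d3·λ³ + d4·x + d5·y (the exact parameterisation used by the software may differ), applying it to a single measurement in C++ looks like this:

  // Applies the assumed polynomial depth correction to one measurement.
  // d[0]..d[5] correspond to the parameters d0, ..., d5 estimated by the tool.
  double CorrectDepth(double lambda,       // measured depth (ray length)
                      double x, double y,  // image coordinates of the pixel
                      const double d[6])   // depth correction parameters
  {
    return lambda + d[0] + d[1] * lambda + d[2] * lambda * lambda +
           d[3] * lambda * lambda * lambda + d[4] * x + d[5] * y;
  }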

Besides the polynomial depth error model, a spline model and a linear error model are also implemented. The linear model consists only of an offset and an inclination parameter. The spline error model follows the spline model introduced by Marvin Lindner and Andreas Kolb:

Marvin Lindner, Andreas Kolb:
Lateral and Depth Calibration of PMD-Distance Sensors
Advances in Visual Computing, Springer, 2, 2006, pp. 524-533.

You can choose which model to use in the project definition window.


If this software is used to produce scientific publications please add a reference to it in the reference section:

Ingo Schiller: MIP - Multi-Camera Calibration.

Further details can be found in the papers:

Marvin Lindner, Ingo Schiller, Andreas Kolb, Reinhard Koch:
Time-of-Flight Sensor Calibration for Accurate Range Sensing
Journal of Computer Vision and Image Understanding (CVIU) 114 (12) (2010), pp. 1318–1328.

Ingo Schiller, Christian Beder, Reinhard Koch:
Calibration of a PMD-Camera Using a Planar Calibration Pattern Together with a Multi-Camera Setup
Proceedings of the XXI ISPRS Congress, Beijing, July 3-11, 2008.


Starting the program will present a properties GUI where you can add the image lists and set some parameters.

  • Add image lists to the calibration project. Image lists are just text files containing the paths of the image files. You can use the button in the bottom right corner to create image lists.
  • When you add an image list you are asked whether this is a spherical camera, i.e. whether the field of view is around 180 degrees. In this case the software tries to estimate the parameters following the model of Scaramuzza. If you use a standard perspective camera, just press "No".
  • After that you are asked whether you want to fix the intrinsic parameters. This means you can load parameters such as principal point, focal length and lens distortion from a file; for this, the parameters have to be given in our file format. (Tip: calibrate one camera without fixed parameters first, then you have a starting point.)
  • The depth distortion parameters are handled in a special way: if you load a file which includes depth parameters, they are taken as fixed; if you load a file without depth parameters, or with all of them set to zero, they are estimated. (Of course, if you don't load anything they are estimated as well.)
  • Select the number of the inner corners of your checkerboard. For example this checkerboard:

has xCorners=7 and yCorners=4

  • Select the size of one square of your checkerboard. That is for example the width of one single black rectangle in the image above.
  • Select whether to use software rendering or GPU rendering. The GPU rendering only works on NVIDIA cards (GeForce 7950 or newer). You can try it, but I recommend using software rendering, although it is slower.
  • Choose which distortion model you wish to estimate: only the radial distortion parameters (Bouguet), radial and tangential distortion parameters, or the distortion parameters as defined by Brown (still experimental). A sketch of how these parameters enter the projection is given after this section.
  • Select which depth error model you wish to use. Polynomial, Linear or Spline (still experimental).
  • Finally select whether your checkerboard is black or white in the top left corner.

Save your project before pressing OK, or you will have to configure it again!
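
To illustrate how the estimated intrinsic parameters (focal length, principal point, skew and radial/tangential distortion) enter the projection of a 3D point, here is a C++ sketch using the common Bouguet/OpenCV-style distortion convention. The exact convention implemented in BIAS may differ, and the structure and function names are placeholders.

  // Placeholder intrinsic parameter set (not the BIAS camera classes).
  struct Camera {
    double fx, fy, skew;  // focal lengths and skew/shear
    double cx, cy;        // principal point
    double k1, k2;        // radial distortion coefficients
    double p1, p2;        // tangential distortion coefficients
  };

  // Projects a point given in camera coordinates (Z > 0) to pixel coordinates (u, v).
  void Project(const Camera& c, double X, double Y, double Z,
               double& u, double& v) {
    // Normalised (undistorted) image coordinates.
    const double xn = X / Z;
    const double yn = Y / Z;
    const double r2 = xn * xn + yn * yn;

    // Radial and tangential lens distortion.
    const double radial = 1.0 + c.k1 * r2 + c.k2 * r2 * r2;
    const double xd = xn * radial + 2.0 * c.p1 * xn * yn + c.p2 * (r2 + 2.0 * xn * xn);
    const double yd = yn * radial + c.p1 * (r2 + 2.0 * yn * yn) + 2.0 * c.p2 * xn * yn;

    // Apply focal lengths, skew and principal point.
    u = c.fx * xd + c.skew * yd + c.cx;
    v = c.fy * yd + c.cy;
  }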


Corner Detection / Selection

  • After pressing ok, the configuration GUI will disappear and the calibration program is started with the main window and an image list.
  • You now have to select all inner corners of the checkerboard in the images.


  • The software tries to detect the corners of the checkerboard automatically. If the detection is successful, the checkerboard is marked in the image as displayed above. If it could not be detected, you have to click the outer four corners with the mouse. A left click sets a corner, a right click lets you start over. Always select the corners in the same way: corner 1 has to be the same physical corner in all images, and the direction of selection has to be the same!
  • If you only click the outer four corners, the other inner corners are interpolated (see the sketch after this list). For cameras with strong lens distortion it is advisable to select all corners. To do so, check the "Select All Corners" checkbox. Then you have to select the corners row-wise, always starting at the left, from top to bottom.
  • Later versions of the software feature an auto-detection button in the menu bar. When pressed, the software detects the corners in all images and marks the valid and invalid ones. It stops if the corners could not be detected in an image.
  • Later versions also have an auto-invalidate feature which automatically invalidates images in which the corners could not be detected (menu item Corners->Auto Invalidate Images).
  • It is also possible to scale the images, for convenience and for corner detection. Use the "Rescale" control in the menu bar. Sometimes the automatic corner detection works better if the images are enlarged or reduced.
  • To navigate through the images, use "Previous" and "Next" or click on the image name in the image list.
  • If you want to skip an image, press "Invalidate".
  • After selecting all corners in all images, save your corners by selecting the menu item Corners->SaveCorners.
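
For illustration, the interpolation of the inner corners from the four clicked outer corners could look like the following bilinear sketch. The tool itself may use a projective (homography-based) interpolation and subsequent corner refinement; this is also why clicking all corners is preferable for strongly distorted lenses.

  #include <vector>

  struct Point2D { double x, y; };

  // c0..c3 are the clicked outer corners in selection order
  // (c0 -> c1 along the first row, c3 -> c2 along the last row).
  std::vector<Point2D> InterpolateInnerCorners(const Point2D& c0, const Point2D& c1,
                                               const Point2D& c2, const Point2D& c3,
                                               int xCorners, int yCorners) {
    std::vector<Point2D> corners;
    corners.reserve(xCorners * yCorners);
    for (int row = 0; row < yCorners; ++row) {
      const double t = (yCorners > 1) ? double(row) / (yCorners - 1) : 0.0;
      for (int col = 0; col < xCorners; ++col) {
        const double s = (xCorners > 1) ? double(col) / (xCorners - 1) : 0.0;
        // Bilinear blend of the four outer corners.
        Point2D p;
        p.x = (1 - s) * (1 - t) * c0.x + s * (1 - t) * c1.x + s * t * c2.x + (1 - s) * t * c3.x;
        p.y = (1 - s) * (1 - t) * c0.y + s * (1 - t) * c1.y + s * t * c2.y + (1 - s) * t * c3.y;
        corners.push_back(p);
      }
    }
    return corners;
  }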

Approximation / Calibration

  • After selecting all corners in all images, press "Approximate". This will approximate the camera position relative to the checkerboard for each image, and it will approximate the intrinsic camera parameters of all cameras.
  • Deprecated in versions > 0.8.5: After that press "Mean". This computes the mean of the relative orientations and positions of all cameras in the camera rig. If only one camera is used, nothing happens.


  • After that press "Calibrate". This starts the real calibration algorithm, minimizing the reprojection error in all images for all valid pixels (a sketch of this error measure follows below).


  • You can follow the process by looking at the reprojections: select Reprojection->ShowReprojection from the menu.
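
The reprojection error that "Calibrate" minimizes is essentially the distance between the detected checkerboard corners and the corners reprojected with the current camera parameters. A minimal sketch of such an error measure (names and types are illustrative only):

  #include <cmath>
  #include <vector>

  struct Point2D { double x, y; };

  // Root-mean-square distance between detected and reprojected corners.
  double RmsReprojectionError(const std::vector<Point2D>& detected,
                              const std::vector<Point2D>& reprojected) {
    double sum = 0.0;
    for (size_t i = 0; i < detected.size(); ++i) {
      const double dx = detected[i].x - reprojected[i].x;
      const double dy = detected[i].y - reprojected[i].y;
      sum += dx * dx + dy * dy;
    }
    return detected.empty() ? 0.0 : std::sqrt(sum / detected.size());
  }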

Obtaining the Results

  • During and at the end of the calibration process you can save the calibration results by selecting Calibration->SaveRig from the menu.
  • The result will be a file in which all camera parameters are saved. This file is in the format of the BIAS::ProjectionParameters. See our software and documentation site for details. BIAS Software
  • The position and rotation of the first camera are set to zero. The positions and rotations of all other cameras are given relative to this first camera.


Verifying the Results

  • When the calibration is done you can save the images with the projections by selecting Calibration->Save Images with Poses from the menu. This saves all images, with the calibration parameters stored in the image headers, in files "".
  • From version 0.8.8 the installation package comes with the BIASEpipolarGeometryViewer software, which can be used to check the quality of the calibration. The software takes two images and two projections (camera parameters) as input; if the camera parameters are included in the images (as they are in this case), giving the two images is sufficient. It displays the images side by side, and when a point in the left image is clicked with the mouse, it draws the corresponding epipolar line in the right image. If the calibration was successful, the epipolar line in the right image should pass through the point corresponding to the clicked point in the left image (see the sketch below).
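
For background, a sketch of the geometry behind this check: with the first camera at the origin (P1 = K1[I|0]) and the second camera described by rotation R and translation t relative to it (P2 = K2[R|t]; if the calibration file stores the camera centre C instead, t = -R·C), the fundamental matrix is F = K2^-T · [t]x · R · K1^-1, and a click at pixel (u, v) in the left image yields the epipolar line l' = F·(u, v, 1)^T in the right image. The helper types below are illustrative; BIAS provides its own matrix classes.

  #include <array>

  using Mat3 = std::array<std::array<double, 3>, 3>;
  using Vec3 = std::array<double, 3>;

  Mat3 Mul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
      for (int j = 0; j < 3; ++j)
        for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[k][j];
    return C;
  }

  Mat3 Transpose(const Mat3& A) {
    Mat3 T{};
    for (int i = 0; i < 3; ++i)
      for (int j = 0; j < 3; ++j) T[i][j] = A[j][i];
    return T;
  }

  // Inverse of the calibration matrix K = [fx s cx; 0 fy cy; 0 0 1].
  Mat3 InverseK(double fx, double fy, double s, double cx, double cy) {
    Mat3 Kinv{};
    Kinv[0][0] = 1.0 / fx;
    Kinv[0][1] = -s / (fx * fy);
    Kinv[0][2] = (s * cy - cx * fy) / (fx * fy);
    Kinv[1][1] = 1.0 / fy;
    Kinv[1][2] = -cy / fy;
    Kinv[2][2] = 1.0;
    return Kinv;
  }

  // Skew-symmetric cross-product matrix [t]x.
  Mat3 CrossMatrix(const Vec3& t) {
    Mat3 M{};
    M[0][1] = -t[2]; M[0][2] =  t[1];
    M[1][0] =  t[2]; M[1][2] = -t[0];
    M[2][0] = -t[1]; M[2][1] =  t[0];
    return M;
  }

  // F = K2^-T * [t]x * R * K1^-1 (first camera at the origin).
  Mat3 FundamentalMatrix(const Mat3& K1inv, const Mat3& K2inv,
                         const Mat3& R, const Vec3& t) {
    return Mul(Mul(Transpose(K2inv), CrossMatrix(t)), Mul(R, K1inv));
  }

  // Epipolar line l' = F * x for the clicked pixel x = (u, v, 1) in the left image.
  // The result (a, b, c) describes the line a*u' + b*v' + c = 0 in the right image.
  Vec3 EpipolarLine(const Mat3& F, double u, double v) {
    const Vec3 x = {u, v, 1.0};
    Vec3 l{};
    for (int i = 0; i < 3; ++i)
      for (int j = 0; j < 3; ++j) l[i] += F[i][j] * x[j];
    return l;
  }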

Good Luck!


The software is maintained by Ingo Schiller; please contact him if you have any questions or encounter errors. Please read the instructions on the previous pages before using the software, and keep in mind that although the software delivers very precise results, it is not a commercial product and errors may occur.

If you encounter an error executing the application, install the Microsoft Redistributable package, either from the start menu or by downloading it here. It provides the necessary DLLs.

Version 1.0.0:

Windows 7: (use setup.exe to install!)

Changes to 0.9.2:

  • A lot of small bugfixes.
  • Updated versions of third-party libraries such as GSL and LAPACK are now used; the old versions probably caused crashes in previous releases.


  • If image formats such as jpg, png, etc. are used, which are loaded via ImageMagick, it is essential that the correct libraries are used. In particular, only (!) the correct IM_MOD_RL_xxx_.dll libraries may be present in the path; if any other versions of these libraries are in the path, the application will crash while loading images.

Version 0.9.2

Windows 7: (use setup.exe to install!)

Changes to 0.9.1:

  • Now writes log messages and intermediate calibration files into user data folder.
  • New checkerboard detector which is more reliable on images with strong distortions (select Crossfilter in Corners menu).
  • New silent mode without annoying message boxes for unsupervised calibration.
  • Small bugfixes.


  • see version 1.0.0


Version 0.8.12:

Windows XP:

Ubuntu 8.10: MIPMultiCameraCalibration0.8.12.deb (experimental!)
Installs itself in /usr/local/bin/, executables are 'MIPMultiCameraCalibration','biasviewwx' and 'biasshowepiwx'.

OpenSuse 11.1, 32 Bit: MIPMultiCameraCalibration-0.8.12-1.i586.rpm (experimental!)
Installs itself in /usr/local/bin/, executables are 'MIPMultiCameraCalibration','biasviewwx' and 'biasshowepiwx'.

Changes to 0.8.10:

  • Now shows in the image list frame which images have already been handled.
  • Fixed bug in display of depth deviation parameters.
  • Corners which are detected in inverted order are now rotated.
  • Autoskip feature for faster calibration (select Auto Invalidate Images in Corners menu).
  • Refine corners with corner detector (experimental!).

All downloads are ZIP files containing an installation package which should work on all Windows (XP, Vista, Windows 7) computers.