Andreas Jordt

Address: Institut für Informatik
Christian-Albrechts-Universität Kiel
Hermann-Rodewald-Str. 3
D-24098 Kiel
Germany
Phone: +49-431-880 4841
Fax: +49-431-880 4054
Email: jordt_at_mip.informatik.uni-kiel.de
Room: Hermann-Rodewald-Straße 3, Room 310
Office hours: by appointment





My current research focuses on deformation reconstruction from depth and color video. The key algorithmic aspects are deformation modeling, occlusion handling, and efficient optimization when approaching the deformation tracking task with "Analysis by Synthesis" methods. Applications of these algorithms range from robot grasping and handling tasks and interactive object tracking to automatic livestock monitoring.

Publications

2014

An Adaptable Robot Vision System Performing Manipulation Actions With Flexible Objects

Leon Bodenhagen, Andreas R. Fugl, Andreas Jordt, Morten Willatzen, Knud A. Andersen, Martin M. Olsen, Reinhard Koch, Henrik G. Petersen, and Norbert Krüger: IEEE Transactions on Automation Science and Engineering, Vol. 11, No. 3, July 2014

Abstract: This paper describes an adaptable system which is able to perform manipulation operations (such as Peg-in-Hole or Laying-Down actions) with flexible objects. As such objects easily change their shape significantly during the execution of an action, traditional strategies, e.g., for solving path-planning problems, are often not applicable. It is therefore required to integrate visual tracking and shape reconstruction with a physical modeling of the materials and their deformations as well as action learning techniques. All these different submodules have been integrated into a demonstration platform, operating in real-time. Simulations have been used to bootstrap the learning of optimal actions, which are subsequently improved through real-world executions. To achieve reproducible results, we demonstrate this for cast silicone test objects of regular shape.
Links: Bibtex

2013

Reconstruction of Deformation from Depth and Color Video with Explicit Noise Models

Andreas Jordt, Reinhard Koch: Time-of-Flight and Depth Imaging: Sensors, Algorithms, and Applications (LNCS 8200), Pages 128--146, Springer Verlag 2013, ISBN 978-3-642-44963-5

Abstract: Depth sensors like ToF cameras and structured light devices provide valuable scene information, but do not provide a stable base for optical flow or feature movement calculation because the lack of texture information makes depth image registration very challenging. Approaches associating depth values with optical flow or feature movement from color images try to circumvent this problem, but suffer from the fact that color features are often generated at edges and depth discontinuities, areas in which depth sensors inherently deliver unstable data. Using deformation tracking as an application, this article will discuss the benefits of Analysis by Synthesis (AbS) when approaching the tracking problem and how it can be used to exploit the complete image information of depth and color images in the tracking process, avoid feature calculation and, hence, the need for outlier handling, and regard every measurement with respect to its accuracy and expected deviation.
In addition to an introduction to AbS based tracking, a novel approach to handle noise and inaccuracy is proposed, regarding every input measurement according to its accuracy and noise characteristics. The method is especially useful for time-of-flight cameras since it allows the correlation between pixel noise and the measured amplitude to be taken into account. A set of generic and specialized deformation models is discussed as well as an efficient way to synthesize and to optimize high dimensional models. The resulting applications range from real-time deformation reconstruction methods to very accurate deformation retrieval using models of 100 dimensions and more.
Links: Full Paper (Self-Archived Version), Bibtex
The final publication is available at www.springerlink.com
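The noise-handling idea in this abstract can be sketched as a weighted Analysis-by-Synthesis objective: render a prediction from the model parameters, then compare it with the measurement while down-weighting unreliable pixels. The following is a minimal illustration, not the paper's implementation; the one-parameter "renderer", the inverse-amplitude noise model, and the 1-D depth profile are all assumptions made for the example.

```python
def tof_noise_sigma(amplitude, base_sigma=0.01):
    """Assumed ToF noise model: per-pixel depth noise grows as the
    measured amplitude drops (low-amplitude pixels are less reliable)."""
    return base_sigma / max(amplitude, 1e-6)

def synthesize(params, n_pixels):
    """Toy 'renderer': synthesizes a 1-D depth profile from two
    deformation parameters (here just a plane: offset + tilt)."""
    offset, tilt = params
    return [offset + tilt * (i / n_pixels) for i in range(n_pixels)]

def abs_error(params, measured_depth, amplitudes):
    """Analysis-by-Synthesis error: compare the synthesized depth image
    with the measurement, weighting each pixel by its expected noise,
    so every measurement is regarded according to its accuracy."""
    synth = synthesize(params, len(measured_depth))
    err = 0.0
    for d_syn, d_meas, amp in zip(synth, measured_depth, amplitudes):
        sigma = tof_noise_sigma(amp)
        err += ((d_syn - d_meas) / sigma) ** 2
    return err
```

In the full method the synthesis step renders the deformed, textured mesh into depth and color images; the optimizer then searches the deformation parameters that minimize this error, with no feature extraction or outlier handling required.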



Flexpad: Highly Flexible Bending Interactions for Projected Handheld Displays

Jürgen Steimle, Andreas Jordt, Pattie Maes: ACM International Conference on Human Factors in Computing Systems (CHI 2013). ACM Press.
(Best Paper Honorable Mention Award)
Abstract:
Flexpad is an interactive system that combines a depth camera and a projector to transform sheets of plain paper or foam into flexible, highly deformable and spatially aware handheld displays. We present a novel approach for tracking deformed surfaces from depth images in real time. It captures deformations in high detail, is very robust to occlusions created by the user’s hands and fingers and does not require any kind of markers or visible texture. As a result, the display is considerably more deformable than previous work on flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation. We illustrate these unique capabilities through three application examples: curved cross-cuts in volumetric images, deforming virtual paper characters, and slicing through time in videos. Results from two user studies show that our system is capable of detecting complex deformations and that users are able to perform them fast and precisely.
Links: Full Paper, Bibtex

2012

Direct Model-based Tracking of 3D Object Deformations in Depth and Color Video

Andreas Jordt, Reinhard Koch: International Journal Of Computer Vision (IJCV). Springer Verlag.

Abstract:
The tracking of deformable objects using video data is a demanding research topic due to the inherent ambiguity problems, which can only be solved using additional assumptions about the deformation. Image feature points, commonly used to approach the deformation problem, only provide sparse information about the scene at hand. In this paper a tracking approach for deformable objects in color and depth video is introduced that does not rely on feature points or optical flow data but employs all the available input image information to find a suitable deformation for the data at hand. A versatile NURBS based deformation space is defined for arbitrarily complex triangle meshes, decoupling the object surface complexity from the complexity of the deformation. An efficient optimization scheme is introduced that is able to calculate results in real-time (25 Hz). Extensive synthetic and real data tests of the algorithm and its features show the reliability of this approach.
Links: Full Paper (Submission Version), Bibtex
The final publication is available at www.springerlink.com
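The decoupling idea behind the NURBS deformation space can be sketched as follows: a small grid of control values drives the displacement of arbitrarily many mesh vertices, so the dimensionality of the search space depends only on the control grid. This is a simplified illustration; bilinear interpolation stands in here for the NURBS basis functions, and the z-only displacement is an assumption made for brevity.

```python
def deform_vertices(vertices, control_grid):
    """Displace mesh vertices using a coarse control grid.

    vertices: list of (x, y, z) with x, y in [0, 1] (the surface
    parameterization); control_grid: 2-D grid of z-displacements.
    The number of optimization parameters equals the number of control
    values, independent of how many vertices the mesh has.
    """
    rows, cols = len(control_grid), len(control_grid[0])
    deformed = []
    for x, y, z in vertices:
        # Locate the control cell containing (x, y).
        gx, gy = x * (cols - 1), y * (rows - 1)
        i0, j0 = min(int(gy), rows - 2), min(int(gx), cols - 2)
        fy, fx = gy - i0, gx - j0
        # Bilinear blend of the four surrounding control displacements
        # (a NURBS surface would blend more control points with
        # rational basis functions instead).
        dz = (control_grid[i0][j0] * (1 - fx) * (1 - fy)
              + control_grid[i0][j0 + 1] * fx * (1 - fy)
              + control_grid[i0 + 1][j0] * (1 - fx) * fy
              + control_grid[i0 + 1][j0 + 1] * fx * fy)
        deformed.append((x, y, z + dz))
    return deformed
```

With this decoupling, an optimizer can track a dense mesh (thousands of triangles) by searching only over the handful of control values.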


Estimation of Material Properties and Pose for Deformable Objects from Depth and Color Images

Andreas R. Fugl, Andreas Jordt, Henrik G. Petersen, Morten Willatzen, Reinhard Koch: In Pattern Recognition, Proceedings of the DAGM/OAGM 2012, Springer Verlag, LNCS 7476, Pages 165--174.

Abstract:
In this paper we consider the problem of estimating 6D pose, material properties and deformation of an object grasped by a robot gripper. To estimate the parameters we minimize an error function incorporating visual and physical correctness. Through simulated and real-world experiments we demonstrate that we are able to find realistic 6D poses and elasticity parameters like Young’s modulus. This makes it possible to perform subsequent manipulation tasks, where accurate modelling of the elastic behaviour is important.
Links: Bibtex

2011

Fast Tracking of Deformable Objects in Depth and Colour Video

Andreas Jordt, Reinhard Koch, BMVC 2011, Proceedings of the British Machine Vision Conference.
Abstract: One challenge in computer vision is the joint reconstruction of deforming objects from colour and depth videos. So far, a lot of research has focused on deformation reconstruction based on colour images only, but as range cameras like the recently released Kinect become more and more common, the incorporation of depth information becomes feasible. In this article a new method is introduced to track object deformation in depth and colour image data. A NURBS based deformation function makes it possible to decouple the geometrical object complexity from the complexity of the deformation itself, providing a low dimensional space to describe arbitrary ’realistic’ deformations. While modelling the tracking objective as an analysis by synthesis problem, which is robust but usually computationally expensive, a set of optimisations is introduced, allowing a very fast calculation of the resulting error function. With a fast semi-global search a system is established that is capable of tracking complex deformations of large objects (6000 triangles and more) at more than 6 Hz on a common desktop machine. The algorithm is evaluated using simulated and real data, showing the robustness and performance of the approach.
Links: Full Paper, Bibtex

An Outline for an Intelligent System Performing Peg-in-Hole Actions with Flexible Objects

Andreas Jordt, Andreas R. Fugl, Leon Bodenhagen, Morten Willatzen, Reinhard Koch, Henrik G. Petersen, Knud A. Andersen, Martin M. Olsen, Norbert Krueger: ICIRA 2011, Proceedings of the International Conference on Intelligent Robotics and Applications.


Abstract:
We describe the outline of an adaptable system which is able to perform grasping and peg-in-hole actions with flexible objects. The system makes use of visual tracking and shape reconstruction, physical modeling of flexible material and learning based on a kernel density approach. We show results for the different sub-modules in simulation as well as on real-world data.

Links: Full Paper, Bibtex

2010

High Resolution Object Deformation Reconstruction with Active Range Sensor

Andreas Jordt, Ingo Schiller, Johannes Bruenger, Reinhard Koch: In Pattern Recognition, Proceedings of the DAGM 2010, Springer Verlag, LNCS 6376, Pages 543--552.

Abstract:
This contribution discusses the 3D reconstruction of deformable freeform surfaces with high spatial and temporal resolution. These are conflicting requirements, since high-resolution surface scanners typically cannot achieve high temporal resolution, while high-speed range cameras like the Time-of-Flight (ToF) cameras capture depth at 25 fps but have a limited spatial resolution. We propose to combine a high-resolution surface scan with a ToF-camera and a color camera to achieve both requirements. The 3D surface deformation is modeled by a NURBS surface that approximates the object surface; the 3D object motion and local 3D deformation are estimated from the ToF and color camera data. A small set of NURBS control points can faithfully model the motion and deformation and is estimated from the ToF and color data with high accuracy. The contribution focuses on the estimation of the 3D deformation NURBS from the ToF and color data.
Links: Full Paper, Bibtex

Accelerating Neuro-Evolution by Compilation to Native Machine Code

Nils T. Siebel, Andreas Jordt, Gerald Sommer: Proceedings of the IJCNN 2010 (IEEE World Congress on Computational Intelligence), ISBN 978-1-4244-6916-1, pages 1--8.

Abstract:
Any neuro-evolutionary algorithm that solves complex problems needs to deal with the issue of computational complexity. We show how a neural network (feed-forward, recurrent or RBF) can be transformed and then compiled in order to achieve fast execution speeds without requiring dedicated hardware like FPGAs. The compiled network uses a simple external data structure, a vector, for its parameters. This allows the weights of the neural network to be optimised by the evolutionary process without the need to re-compile the structure. In an experimental comparison our method effects a speedup of factor 5-10 compared to the standard method of evaluation (i.e., traversing a data structure with optimised C++ code).
Links: Bibtex
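The core design can be sketched in a few lines: the network topology is compiled once into executable code, while the weights stay in an external vector that the evolutionary optimizer can mutate freely without triggering recompilation. The sketch below uses Python code generation and bytecode compilation as a stand-in for the paper's native machine code; `compile_network` and the tanh activation are assumptions made for this example.

```python
import math

def compile_network(layer_sizes):
    """Generate and compile an evaluation function for a fixed
    feed-forward topology.  The weights live in an external vector `w`
    passed at call time, so an optimizer can change them without
    recompiling the structure."""
    lines = ["def evaluate(x, w):", "    k = 0", "    a = list(x)"]
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        # Unroll one fully connected layer with tanh activation;
        # layer sizes are baked into the generated code as constants.
        lines.append("    nxt = []")
        lines.append(f"    for j in range({n_out}):")
        lines.append(f"        s = sum(a[i] * w[k + j * {n_in} + i]"
                     f" for i in range({n_in}))")
        lines.append("        nxt.append(math.tanh(s))")
        lines.append(f"    k += {n_in * n_out}")
        lines.append("    a = nxt")
    lines.append("    return a")
    namespace = {"math": math}
    exec(compile("\n".join(lines), "<generated>", "exec"), namespace)
    return namespace["evaluate"]
```

Usage: `net = compile_network([2, 3, 1])` compiles once; evolution then repeatedly calls `net(inputs, w)` with mutated weight vectors `w` (here of length 2*3 + 3*1 = 9), never paying the compilation cost again.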

2009

Compiling Neural Networks for Fast Neuro-Evolution

Nils T. Siebel, Andreas Jordt, Gerald Sommer: In Proceedings of the 2nd International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (IROS 2009 workshop), St. Louis, USA, October 2009, pp. 23--29.

Abstract:
Any neuro-evolutionary algorithm that solves complex problems needs to deal with the issue of computational complexity. We show how a neural network (feed-forward, recurrent or RBF) can be transformed and then compiled in order to achieve fast execution speeds without requiring dedicated hardware like FPGAs. In an experimental comparison our method effects a speedup of factor 5–10 compared to the standard method of evaluation (i.e., traversing a data structure with optimised C++ code).

Links: Full Paper, Bibtex

Automatic High-Precision Self-Calibration of Camera-Robot Systems

Andreas Jordt, Nils T. Siebel, Gerald Sommer: In Proceedings of 2009 IEEE International Conference on Robotics and Automation (ICRA 2009), Kobe, Japan, pages 1244--1249, ISBN 978-1-4244-2789-5, May 2009.
Abstract: In this article a new method is presented to obtain a full and precise calibration of camera-robot systems with eye-in-hand cameras. It achieves a simultaneous and numerically stable calibration of intrinsic and extrinsic camera parameters by analysing the image coordinates of a single point marker placed in the environment of the robot. The method works by first determining a rough initial estimate of the camera pose in the tool coordinate frame. This estimate is then used to generate a set of uniformly distributed calibration poses from which the object is visible. The measurements obtained in these poses are then used to obtain the exact parameters with CMA-ES (Covariance Matrix Adaptation Evolution Strategy), a derandomised variant of an evolution strategy optimiser. Minimal claims on the surrounding area and flexible handling of environmental and kinematical limitations make this method applicable to a range of robot setups and camera models. The algorithm runs autonomously without supervision and does not need manual adjustments. Our problem formulation is directly in the 3D space which helps in minimising the resulting calibration errors in the robot’s task space. Both simulations and experimental results with a real robot show a very good convergence and high repeatability of calibration results without requiring user-supplied initial estimates of the calibration parameters.
Links: Full Paper, Bibtex
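The cost that a black-box optimizer such as CMA-ES minimizes here can be illustrated with a reduced sketch: project the single point marker under candidate camera parameters and accumulate squared pixel residuals over the calibration poses. This is a deliberately simplified stand-in, not the paper's formulation: only four pinhole intrinsics are optimized, and the hand-eye transform and robot kinematics (which map the marker into each camera frame) are omitted, so the marker positions are assumed to be given directly in camera coordinates.

```python
def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point given in the camera frame."""
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)

def calibration_error(params, observations):
    """Sum of squared reprojection residuals of the single marker.

    params: candidate intrinsics (fx, fy, cx, cy); observations: list
    of (marker_in_camera_frame, measured_pixel) pairs, one per
    calibration pose.  An optimizer like CMA-ES would search `params`
    (and, in the full method, the hand-eye transform) to minimize
    this value.
    """
    fx, fy, cx, cy = params
    err = 0.0
    for point_cam, (u_meas, v_meas) in observations:
        u, v = project(point_cam, fx, fy, cx, cy)
        err += (u - u_meas) ** 2 + (v - v_meas) ** 2
    return err
```

Formulating the residuals over many uniformly distributed poses is what makes the intrinsic and extrinsic parameters simultaneously observable from a single point marker.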

Diploma Thesis

Selbstkalibrierung eines Kamera-Robotersystems (Self-Calibration of a Camera-Robot System),
Cognitive Systems Group, Institute of Computer Science,
University of Kiel, 2009


Software

To enable fast prototyping and visualization I implemented mipEdit3D, a software tool based on BIAS that can edit 2D and 3D content, process data in real time, and provide visualization features, including camera animation, on-screen/off-screen rendering, and support for several hardware camera types.
It features a plugin architecture that allows plugins to be loaded dynamically at runtime, including the possibility to switch between mipEdit3D and plugin source code to automatically recompile, reload, and seamlessly reintegrate a plugin while the mipEdit3D tool chain using it is still running.
An ICE-based communication plugin suite enables communication between mipEdit3D instances and external programs over the network, making it possible to build distributed processing chains in local networks and via the Internet.


Teaching




Created by admin. Last Modification: Friday 11 of July, 2014 14:26:27 CEST by jordt.