3D Vision and Mediated Reality

Principal Investigator: Prof. Joni Kamarainen

Our interest in augmented reality originates from our work on real-time camera tracking and 3D reconstruction. In 2008, our team proposed one of the very first real-time, online monocular 3D reconstruction methods.

We have since shifted our focus from passive monocular RGB cameras to RGB-D cameras (Kinect-style sensors), and we continuously evaluate new hardware as well (time-of-flight, plenoptic, etc.).

Important recent application fields for our methods are human-computer interaction (HCI) and robot vision.

Featured projects


In this project we investigate large-scale 3D computer vision methods for recognition, reconstruction, and related tasks. That is, the methods are applied in domains that contain large amounts of 2D and 3D data, such as 2D and 3D maps combined with street-view imagery.


The starting point of this project was to find affordable ways to adopt augmented reality techniques in film and online broadcasting (television) production. We aimed to boost next-generation video-based "tweeting" and to help students become familiar with augmented reality technology. In particular, we have developed highly accurate and efficient (state-of-the-art quality) RGB-D (Kinect) based dense camera tracking and reconstruction. Our methods run on a commodity laptop and perform camera tracking and reconstruction online in real time.
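Dense RGB-D tracking and reconstruction of this kind builds on a basic step: back-projecting each depth pixel into a 3D point using the pinhole camera intrinsics. The sketch below illustrates only that step, not the project's actual pipeline; the intrinsic values are illustrative Kinect-like numbers, not calibrated ones.

```python
import numpy as np

# Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
# FX, FY, CX, CY are illustrative Kinect-like intrinsics (assumption, not
# calibration data from the project).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth):
    """Back-project a (H, W) depth image in metres to an (H*W, 3) point cloud.

    Pixels with depth 0 simply map to the origin and can be filtered later.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy input: a flat wall 2 m in front of the camera.
depth = np.full((480, 640), 2.0)
pts = depth_to_points(depth)
print(pts.shape)  # → (307200, 3)
```

Per-frame point clouds like this are what a dense tracker aligns from frame to frame to recover the camera motion.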


The RTMosaic project started from real-time, video-based image mosaicking, and the method was later extended to real-time, online feature-based camera tracking (monocular vSLAM) and 3D reconstruction. In particular, we looked for solutions that run on commodity hardware (CPUs and GPUs).
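Feature-based mosaicking of this kind rests on estimating a planar homography between overlapping frames from matched feature points. Below is a minimal NumPy sketch of the classic Direct Linear Transform (DLT) step; the function name and test values are illustrative, not the project's actual code.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points (N >= 4).

    Each correspondence (x, y) -> (u, v) contributes two linear equations in
    the 9 homography entries; the solution is the null space of the stacked
    design matrix, taken from the last right singular vector of its SVD.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale

# Toy check: recover a known pure-translation homography (illustrative values).
H_true = np.array([[1.0, 0.0, 15.0],
                   [0.0, 1.0, 7.0],
                   [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [320, 0], [320, 240], [0, 240], [160, 120]], dtype=float)
dst_h = np.hstack([src, np.ones((5, 1))]) @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:]
H_est = dlt_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # → True
```

In a real mosaicking pipeline the correspondences come from a feature detector and matcher, and a robust estimator such as RANSAC wraps this DLT step to reject mismatches before the frames are warped into a common mosaic.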