MSc Dissertation Proposal 2016/2017

Augmented Depth Map combining Kinect and Lytro Sensors

-- Project Description at Fenix:

Objectives

This thesis is integrated in the Augmented Human Assistance (AHA) project, which comprises several research teams (ISR, CMU, M-ITI) and companies (Plux, YDreams Robotics). The AHA project proposes the development and deployment of a novel Robotic Assistance Platform to alleviate the current and upcoming social, psychological and economic burden related to sedentarism and aging-related morbidities.

 

The objectives of this work are:

1. Plenoptic camera depth accuracy and calibration.

2. Integration of the depth maps estimated using the two sensors.

Requirements

N.A.

Place for conducting the work:

ISR / IST

Observations

The Kinect is a sensor designed for the consumer games and entertainment market, and it is also widely used for research in mobile robotics and computer vision. The device comprises a camera pair consisting of one color sensor and one depth sensor. The Kinect V1 depth sensor is a structured-light 3D scanner, while the Kinect V2 depth sensor is based on Time-of-Flight principles. According to Microsoft, the depth sensing range of the newer Kinect (Kinect V2) goes from 0.5 m to 4.5 m, while the default depth range of the older Kinect (Kinect V1) goes from 0.8 m to 4.0 m. Nonetheless, Herrera et al. showed that, even with the manufacturer calibration, the Kinect V1 produces depth measurements with higher uncertainty below 1.5 m.
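
For illustration only, the snippet below shows how the operating range quoted above could be used to discard out-of-range Kinect V2 measurements before any further processing. The function name is illustrative, and it assumes the depth map has already been converted to metres.

import numpy as np

# Operating range of the Kinect V2 depth sensor quoted above (metres).
KINECT_V2_RANGE_M = (0.5, 4.5)

def mask_out_of_range(depth_m, valid_range=KINECT_V2_RANGE_M):
    """Set depth measurements outside the sensor's stated range to NaN."""
    depth = depth_m.astype(float)
    near, far = valid_range
    depth[(depth < near) | (depth > far)] = np.nan
    return depth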

 

On the other hand, the appearance of the first commercial plenoptic cameras (Lytro and Raytrix) has raised the interest of the research community in these devices. Players such as NVIDIA and Adobe have also built prototypes using this technology to create new types of displays, namely near-eye and 3D displays. Plenoptic cameras sample the lightfield, providing information about the direction and contribution of each ray to the total amount of light captured in an image. They allow depth to be estimated from a single acquisition using algorithms based on epipolar geometry, and their characteristics suggest they can be used for a short range of depths near the camera.
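
As an illustration of this kind of depth estimation, the sketch below estimates disparity from the slope of the lines in a single epipolar plane image (EPI) using the local structure tensor. It is only one possible epipolar-geometry-based method, not necessarily one of the methods to be evaluated in the thesis, and it assumes the light field has already been decoded into sub-aperture views; the function name and parameters are illustrative.

import numpy as np
from scipy import ndimage

def epi_disparity(epi, sigma=1.5):
    """Estimate per-pixel disparity from the slope of the lines in an
    epipolar plane image (EPI), using the local structure tensor.

    epi : 2-D array of shape (n_views, width); rows are the angular
          samples (u) and columns the spatial samples (x) of a scanline.
    """
    # Intensity gradients along the angular (rows) and spatial (columns) axes.
    i_u = ndimage.sobel(epi.astype(float), axis=0, mode="nearest")
    i_x = ndimage.sobel(epi.astype(float), axis=1, mode="nearest")

    # Smoothed structure tensor components.
    j_uu = ndimage.gaussian_filter(i_u * i_u, sigma)
    j_xx = ndimage.gaussian_filter(i_x * i_x, sigma)
    j_ux = ndimage.gaussian_filter(i_u * i_x, sigma)

    # The eigenvector of the structure tensor with the smallest eigenvalue
    # points along the epipolar line; its slope dx/du is the disparity,
    # which is inversely related to depth.
    lam_min = 0.5 * (j_uu + j_xx) - np.sqrt(0.25 * (j_uu - j_xx) ** 2 + j_ux ** 2)
    v_u, v_x = j_ux, lam_min - j_uu
    return v_x / (v_u + 1e-12)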

 

The work plan consists of evaluating the depth accuracy, at different depths, of several depth estimation methods based on epipolar geometry analysis. The depth maps from the Lytro and Kinect cameras should then be integrated to obtain an augmented depth map with an overall lower uncertainty.
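
As a simple illustration of how the two depth maps could be integrated, the sketch below fuses two registered depth maps by inverse-variance weighting. The function and variable names, and the assumption that per-pixel variance estimates are available for both sensors, are hypothetical; it indicates only one possible integration strategy.

import numpy as np

def fuse_depth_maps(z_kinect, var_kinect, z_lytro, var_lytro):
    """Fuse two registered depth maps by inverse-variance weighting.

    All arrays share the same image grid and metric units; pixels with no
    valid measurement are NaN.  Where both sensors contribute, the fused
    variance 1 / (1/var_k + 1/var_l) is lower than either input variance.
    """
    def weights(z, var):
        w = np.zeros_like(z, dtype=float)
        ok = np.isfinite(z) & np.isfinite(var) & (var > 0)
        w[ok] = 1.0 / var[ok]
        return w

    w_k, w_l = weights(z_kinect, var_kinect), weights(z_lytro, var_lytro)
    w_sum = w_k + w_l
    valid = w_sum > 0

    fused = np.full(w_sum.shape, np.nan)
    fused[valid] = (np.nan_to_num(z_kinect) * w_k
                    + np.nan_to_num(z_lytro) * w_l)[valid] / w_sum[valid]
    fused_var = np.full(w_sum.shape, np.nan)
    fused_var[valid] = 1.0 / w_sum[valid]
    return fused, fused_var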

 

-- More Information

 

More MSc dissertation proposals on Computer and Robot Vision at:

http://omni.isr.ist.utl.pt/~jag