Scene Understanding using Lightfield Cameras (id 17437)

 

 

Description

 

Lightfield cameras can be modeled as an array of cameras. This array can be described compactly by an intrinsic matrix with only 6 parameters [Dansereau13, Zhang18]. However, approaches for the self-calibration of these cameras are scarce. Furthermore, these cameras make it possible to recover the position of a point in the scene from a single acquisition. In this work, we aim to develop a self-calibration procedure for lightfield cameras, for example based on lines in the scene [Hartley2003], and to study the reconstruction capabilities of a recently proposed deep neural network [Shin18].
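
To make the array-of-cameras view concrete, one common ray model (the homogeneous intrinsic matrix of [Dansereau13]) maps the four pixel indices (i, j, k, l) of a decoded lightfield to a ray (s, t, u, v) in two-plane coordinates. The Python sketch below illustrates this mapping; the entries of H are illustrative placeholders standing in for calibrated values, not parameters given in the proposal.

    import numpy as np

    # Homogeneous lightfield intrinsic matrix H (following [Dansereau13]):
    # maps pixel indices (i, j, k, l) to a ray (s, t, u, v) in two-plane
    # coordinates. All entries below are illustrative placeholders; the
    # real values are what calibration must estimate.
    H = np.array([
        [0.25, 0.0,  0.0,   0.0,   -1.0],  # s: sub-aperture spacing, offset
        [0.0,  0.25, 0.0,   0.0,   -1.0],  # t
        [0.0,  0.0,  0.001, 0.0,   -0.3],  # u: pixel pitch, offset
        [0.0,  0.0,  0.0,   0.001, -0.3],  # v
        [0.0,  0.0,  0.0,   0.0,    1.0],
    ])

    def pixel_to_ray(i, j, k, l):
        """Map lightfield pixel indices to a ray (s, t, u, v)."""
        ray = H @ np.array([i, j, k, l, 1.0])
        return ray[:4]

    # Ray seen by sub-aperture (4, 4) at pixel (300, 300):
    print(pixel_to_ray(4, 4, 300, 300))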

 

Objectives

 

- Study the camera models for lightfield cameras.

- Calibrate a lightfield camera from lines in the scene.

- Reconstruct the scene using epipolar plane image (EPI) geometry networks (a sketch of the underlying slope-depth relation follows this list).
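
As a pointer for the last objective, EPI-based networks such as EPINET [Shin18] exploit the fact that a scene point traces a straight line in an epipolar plane image, with slope inversely proportional to its disparity between adjacent views. A minimal sketch of this slope-depth relation, assuming a rectified camera array with focal length f in pixels and baseline B between adjacent sub-apertures (illustrative values, not from the proposal):

    def depth_from_epi_slope(disparity_px, focal_px, baseline_m):
        """Depth of a point from the slope of its EPI line.

        A point at depth Z shifts by d = focal_px * baseline_m / Z pixels
        between adjacent sub-aperture views, so its EPI line has slope
        1/d (views per pixel); inverting the relation recovers Z.
        """
        return focal_px * baseline_m / disparity_px

    # Example: f = 500 px, 1 mm baseline, 2 px disparity -> Z = 0.25 m.
    print(depth_from_epi_slope(2.0, 500.0, 0.001))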

 

References:

 

[Dansereau13] Dansereau, Donald G., Oscar Pizarro, and Stefan B. Williams. "Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013.

[Zhang18] Zhang, Qi, et al. "A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.

[Hartley2003] Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.

[Shin18] Shin, Changha, et al. "EPINET: A Fully-Convolutional Neural Network Using Epipolar Geometry for Depth from Light Field Images." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.

 

Place where the work will be conducted:

 

ISR / IST


More MSc dissertation proposals on Computer and Robot Vision in:

http://users.isr.tecnico.ulisboa.pt/~jag/msc/msc_2018_2019.html