Multi-Camera Array Representation and Reconstruction
Description
A lightfield camera can be represented as an array of cameras. This array can be described compactly by an intrinsic matrix with only 6 parameters [Dansereau13,Zhang18]. Recently, a dataset was acquired with multiple lightfield cameras [MultiLytro18], which can be thought of as a multi-camera array. Current calibration strategies represent each camera with 6 intrinsic parameters plus an additional 6 parameters for its relative pose, so the size of the representation grows with the number of cameras in the array. In this work, we want to investigate a camera model that represents these cameras compactly and whose size does not grow with the number of cameras considered in the array. Such a model will allow one of the current reconstruction methods [Benchmark16] to be adapted to multi-camera arrays.
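To make the parameter-counting argument concrete, the following sketch models a lightfield camera as a grid of pinhole views whose rays are given, in two-plane parameterization, by a compact intrinsic vector. The 6-parameter layout shown here (two sub-aperture baselines, two pixel pitches, a principal point) is an illustrative assumption, not the exact model of [Dansereau13] or [Zhang18]; all names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class LFIntrinsics:
    """Hypothetical 6-parameter lightfield intrinsic model."""
    ds: float  # baseline between adjacent sub-apertures (s axis)
    dt: float  # baseline between adjacent sub-apertures (t axis)
    du: float  # pixel pitch on the image plane (u axis)
    dv: float  # pixel pitch on the image plane (v axis)
    u0: float  # principal point (u)
    v0: float  # principal point (v)


def pixel_to_ray(intr, k, l, i, j):
    """Map sub-aperture index (k, l) and pixel (i, j) to a ray (s, t, u, v)
    in two-plane parameterization, assuming an affine intrinsic mapping."""
    s = intr.ds * k
    t = intr.dt * l
    u = intr.du * (i - intr.u0)
    v = intr.dv * (j - intr.v0)
    return (s, t, u, v)


def per_camera_param_count(num_cameras):
    """Per-camera calibration cost: 6 intrinsic + 6 pose parameters each,
    so the representation grows linearly with the array size."""
    return 12 * num_cameras
```

For example, calibrating a 4-camera array independently would already need `per_camera_param_count(4) == 48` parameters, which is the growth the proposed compact model aims to avoid.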
Objectives:
- Study camera models for lightfield cameras.
- Acquire a dataset of a static scene with a single camera for calibration.
- Calibrate multiple lightfield cameras simultaneously.
- Reconstruct the scene using multiple cameras.
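The simultaneous-calibration objective can be sketched as a joint least-squares residual: one shared intrinsic set for all cameras plus a 6-DoF pose (axis-angle rotation and translation) per camera, fitted against observed calibration-target corners. This is a minimal sketch under a pinhole approximation of each sub-aperture view; the function names and the 4-parameter intrinsic set are assumptions for illustration, and the residual vector would be handed to any nonlinear least-squares solver.

```python
import math


def rotate(rvec, p):
    """Rotate 3D point p by axis-angle vector rvec (Rodrigues' formula)."""
    theta = math.sqrt(sum(c * c for c in rvec))
    if theta < 1e-12:
        return list(p)
    k = [c / theta for c in rvec]  # unit rotation axis
    kxp = [k[1] * p[2] - k[2] * p[1],
           k[2] * p[0] - k[0] * p[2],
           k[0] * p[1] - k[1] * p[0]]  # cross product k x p
    kdp = sum(ki * pi for ki, pi in zip(k, p))
    c, s = math.cos(theta), math.sin(theta)
    return [p[m] * c + kxp[m] * s + k[m] * kdp * (1 - c) for m in range(3)]


def calib_residuals(intr, poses, board_points, observations):
    """Joint calibration residuals for a multi-camera array.

    intr:          shared intrinsics (fx, fy, u0, v0) -- hypothetical layout
    poses:         per-camera (rvec, tvec), the 6-DoF relative poses
    board_points:  3D calibration-target points, common to all cameras
    observations:  per-camera list of observed 2D corners (u, v)
    """
    fx, fy, u0, v0 = intr
    res = []
    for (rvec, tvec), obs in zip(poses, observations):
        for X, (u_obs, v_obs) in zip(board_points, obs):
            # transform target point into this camera's frame
            Xc = [a + b for a, b in zip(rotate(rvec, X), tvec)]
            # pinhole projection of the (central) sub-aperture view
            u = fx * Xc[0] / Xc[2] + u0
            v = fy * Xc[1] / Xc[2] + v0
            res += [u - u_obs, v - v_obs]
    return res
```

Because the intrinsics are shared across the whole array, only the pose block grows with the number of cameras, which is the direction the proposed compact model pushes further.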
References:
[MultiLytro18] http://lightfields.stanford.edu/mvlf/
[Dansereau13] Dansereau, Donald G., Oscar Pizarro, and Stefan B. Williams. "Decoding, calibration and rectification for lenselet-based plenoptic cameras." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013.
[Zhang18] Zhang, Qi, et al. "A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
[Benchmark16] http://hci-lightfield.iwr.uni-heidelberg.de/
Requirements (grades, required courses, etc):
Current average grade >= 15
Place for conducting the work:
ISR / IST