Visual Anticipation of Collisions with Moving Objects (id 7260)

 

Objectives:

 

Plenoptic cameras allow estimating depth from a single image by analyzing their epipolar geometry. Combining depth and motion estimation makes it possible to anticipate whether a moving object is on a collision course with the camera.
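As a rough illustration of how depth and motion estimates could feed a collision warning, the minimal Python sketch below computes a time-to-contact from a short sequence of depth estimates. The frame interval, depth values, and function name are illustrative assumptions, not part of the project specification.

```python
import numpy as np

def time_to_contact(depth_m, closing_speed_mps):
    """Naive time-to-contact: depth divided by the closing speed
    (positive speed means the object is approaching the camera)."""
    if closing_speed_mps <= 0:
        return np.inf  # object is not approaching
    return depth_m / closing_speed_mps

# Hypothetical example: depth estimates from three consecutive lightfield frames.
depths = [4.0, 3.75, 3.5]   # estimated depth of the object, in metres (assumed)
dt = 0.1                    # frame interval in seconds (assumed)
closing_speed = (depths[0] - depths[-1]) / (dt * (len(depths) - 1))
ttc = time_to_contact(depths[-1], closing_speed)
print(f"closing speed ~ {closing_speed:.2f} m/s, time to contact ~ {ttc:.2f} s")
```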

 

The objectives of this work are:

1) Plenoptic camera model definition and calibration.

2) Depth range limits definition for a plenoptic camera.

3) Depth estimation using the camera epipolar geometry (see the sketch after this list).

4) Mosaicking, 3D reconstruction, and comparison with standard methods.
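Regarding objective 3, one common way to estimate depth from a lightfield is to measure the slope of lines in an epipolar-plane image and convert it to depth with a stereo-like relation Z ≈ f·b/d, where f is the focal length in pixels, b the baseline between adjacent sub-aperture views, and d the disparity per view. The sketch below assumes this relation and a hypothetical calibration; the actual model and its depth range limits will follow from objectives 1 and 2.

```python
import numpy as np

def depth_from_epi_slope(disparity_px, focal_px, baseline_m):
    """Stereo-like depth from the per-view disparity (EPI line slope, in
    pixels per sub-aperture step): Z ~ f * b / d.  Calibration values assumed."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Hypothetical calibration: 500 px focal length, 1 mm baseline between sub-apertures.
disparities = np.array([0.05, 0.1, 0.25, 0.5])   # pixels per sub-aperture step
print(depth_from_epi_slope(disparities, focal_px=500.0, baseline_m=1e-3))
# -> [10.  5.  2.  1.]  metres, under the assumed calibration
```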

 

Requirements (grades, required courses, etc):

--

 

Localization:

ISR / IST

 

Observations:

 

Typical strategies to obtain a 3D reconstruction are based on shape from motion and shape from shading. 3D reconstruction is usually challenging due to the need to integrate large amounts of data within a short time frame. Plenoptic cameras provide more information than conventional cameras: they capture the direction and contribution of each ray to the total amount of light recorded in an image. This allows depth to be estimated from a single image by analyzing the camera's epipolar geometry, which may help overcome some of the current limitations of conventional cameras.
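To make the ray information concrete: a decoded lightfield is often handled as a 4D array under a two-plane parameterization, from which a sub-aperture view or an epipolar-plane image (EPI) is a simple slice. The array shape and axis ordering below are assumptions for illustration only.

```python
import numpy as np

# Assumed layout: L[s, t, v, u] with (s, t) indexing the angular (sub-aperture)
# coordinates and (v, u) the spatial pixel coordinates.
S, T, V, U = 9, 9, 128, 128
lightfield = np.random.rand(S, T, V, U)   # stand-in for decoded plenoptic data

# A sub-aperture (pinhole-like) view: fix the angular coordinates.
center_view = lightfield[S // 2, T // 2]      # shape (V, U)

# A horizontal epipolar-plane image (EPI): fix t and a scanline v, keep (s, u).
epi = lightfield[:, T // 2, V // 2, :]        # shape (S, U)

print(center_view.shape, epi.shape)           # (128, 128) (9, 128)
```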

 

Plenoptic cameras trade spatial resolution for angular resolution, which normally results in images with low spatial resolution. To overcome this limitation, we aim to create a lightfield mosaic and perform a 3D reconstruction.
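For comparison purposes, a conventional-image mosaicking baseline can be built from feature matches and a RANSAC homography, e.g. with OpenCV as sketched below. The file names and parameter values are placeholders; this is not the lightfield mosaicking method to be developed in the project.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Mosaic two overlapping views with ORB matches + RANSAC homography.
    A baseline sketch for conventional images, not the lightfield pipeline."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Warp img_b into img_a's frame and paste img_a on top (no blending).
    h, w = img_a.shape[:2]
    mosaic = cv2.warpPerspective(img_b, H, (2 * w, h))
    mosaic[:h, :w] = img_a
    return mosaic

# Usage (placeholder file names):
# mosaic = stitch_pair(cv2.imread("view_left.png"), cv2.imread("view_right.png"))
```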

 

The work plan consists of studying the plenoptic camera model and its calibration, and of implementing standard mosaicking and depth estimation algorithms for both conventional and plenoptic cameras. These methods will be applied to real datasets, and the results obtained with the plenoptic camera will be compared to those obtained with a conventional camera.
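For the comparison between the plenoptic and conventional pipelines, simple per-pixel depth error metrics over valid pixels are one possible way to report results; the sketch below is an assumed example, not the project's evaluation protocol.

```python
import numpy as np

def depth_errors(estimate, reference):
    """RMSE and mean absolute error over pixels where both depth maps are
    valid (finite, positive reference).  Assumed metric, for illustration."""
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    valid = np.isfinite(estimate) & np.isfinite(reference) & (reference > 0)
    diff = estimate[valid] - reference[valid]
    return {"rmse": float(np.sqrt(np.mean(diff ** 2))),
            "mae": float(np.mean(np.abs(diff))),
            "valid_pixels": int(valid.sum())}

# Hypothetical usage with two depth maps of the same scene:
# print(depth_errors(depth_plenoptic, depth_conventional))
```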

 

More information about this project:

http://users.isr.ist.utl.pt/~jag/msc/msc_2017_2018.html