Bio-Inspired Event-based Lightfield Reconstruction

 

Description

Lightfield cameras [Ng05, Perwass12] capture a scene point from multiple perspectives. While these different perspectives can be used to retrieve the 3D structure of the scene, the large number of images demands substantial storage. For example, the lowest-resolution plenoptic camera acquires a 382 x 381 sub-aperture image with 11 x 11 directions per pixel, which is roughly equivalent to eight standard 1920 x 1080 images.
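The storage comparison above can be checked with a quick back-of-the-envelope calculation using the figures from the text (a sketch only; real plenoptic files also carry raw sensor data and metadata, which this ignores):

```python
# Pixel-count comparison between one plenoptic capture and Full HD frames,
# using the resolutions quoted in the text.
sub_aperture = 382 * 381            # pixels per sub-aperture image
directions = 11 * 11                # angular samples (directions) per pixel
lightfield_pixels = sub_aperture * directions

full_hd = 1920 * 1080               # pixels in one standard Full HD image
ratio = lightfield_pixels / full_hd
print(round(ratio, 1))              # ~8.5 Full HD images per capture
```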

On the other hand, recent event-based cameras [Delbruck11] mimic the behaviour of the retina [Mahowald88] by representing an image as spikes (events) that report changes in image intensity. This sparse representation enables much higher acquisition rates and avoids the storage requirements of lightfield cameras.
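A minimal sketch of this event-generation principle, assuming the common contrast model in which an event fires when the log-intensity at a pixel changes by more than a threshold (the function name, threshold value, and frame-difference setup are illustrative, not part of any specific sensor):

```python
import numpy as np

def events_from_frames(prev, curr, threshold=0.2, eps=1e-6):
    """Emit (x, y, polarity) events at pixels where log-intensity
    changed by at least `threshold` between two frames.
    Polarity is +1 for brightening, -1 for darkening."""
    diff = np.log(curr + eps) - np.log(prev + eps)
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarities = np.sign(diff[ys, xs]).astype(int)
    return list(zip(xs.tolist(), ys.tolist(), polarities.tolist()))

# Example: one pixel brightens, the rest stay constant,
# so only a single event is produced -- a sparse output.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.8
print(events_from_frames(prev, curr))  # [(2, 1, 1)]
```

The sparsity is the point: static regions generate no events at all, which is what keeps acquisition rates high and storage low.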

The work plan comprises extending a virtual environment [Honauer16, Mueggler17] to acquire an event-based lightfield dataset. This dataset will then be used to recover the original lightfield images and the 3D structure of the scene, which can be reconstructed either from the lightfield images or directly from the event-based information.

 

Objectives

- Study the camera models for lightfield and event-based cameras

- Create an event-based lightfield framework mimicking the behaviour of retinas

- Recover the lightfield image from the sparse event-based acquisition and perform 3D reconstruction

 

References:

[Ng05] Ng, Ren. Digital light field photography. PhD dissertation, Stanford University, 2006.

[Perwass12] Perwass, C., & Wietzke, L. (2012, February). Single lens 3D-camera with extended depth-of-field. In Human Vision and Electronic Imaging XVII (Vol. 8291, p. 829108). International Society for Optics and Photonics.

[Mahowald88] Mead, Carver A., and Misha A. Mahowald. "A silicon model of early visual processing." Neural networks 1.1 (1988): 91-97.

[Delbruck11] Berner, Raphael, and Tobi Delbruck. "Event-based pixel sensitive to changes of color and brightness." IEEE Transactions on Circuits and Systems I: Regular Papers 58.7 (2011): 1581-1590.

[Honauer16] Honauer, Katrin, et al. "A dataset and evaluation methodology for depth estimation on 4d light fields." Asian Conference on Computer Vision. Springer, Cham, 2016.

[Mueggler17] Mueggler, Elias, et al. "The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM." The International Journal of Robotics Research 36.2 (2017): 142-149.

 

Requirements (grades, required courses, etc.):

Current average grade >= 15

 

Place for conducting the work-proposal:

ISR / IST