Figure: House fly (left), courtesy of Armando Frazão; insect eyes and possible machine counterparts (right), from [Neumann05].
MSc dissertation proposal 2015/2016
Imaging Through Twisted Optic Fiber Bundles
Introduction:
Current cameras are composed of a CCD and a lens that focuses light onto the
CCD. Recent research has shown that by using an array of lenses it is possible
to improve the quality of images [Lytro_www]. This
kind of camera may be the future of 3D TV.
Modelling cameras based on an array of lenses usually implies modelling
many small cameras. In a simpler representation, one may consider a collection
of photocells, the so-called discrete camera.
Discrete cameras are collections of pixels (photocells) organized as
pencils of lines with unknown topologies. Unlike common cameras,
discrete cameras can be formed by just a sparse, non-regular set of
pixels.
Discrete cameras are interesting for robotic applications because they
allow designs specific to the tasks at hand [Neumann05], but they pose a
challenge right from the calibration stage.
Recent research has shown that discrete cameras that can be moved
freely and have a central arrangement of the pixels can be calibrated from
natural scenes [Grossmann10]. This MSc project focuses on building and
calibrating a discrete camera.
Objectives:
In this work the objectives are threefold: (i)
mounting a discrete camera by combining a standard camera with a bundle of optical fibers,
(ii) mounting the setup on top of a pan-tilt base, and (iii) calibrating the camera in order to
obtain images readable by humans.
Detailed description:
Conventional video cameras are built from CCD or CMOS sensors whose
pixels are organized in rectangular grids. Determining the intrinsic parameters
of a mobile camera without any assumptions about the imaged world is called
camera self- or auto-calibration [Hassanpour04]. More commonly, cameras are
static and are shown a planar structured (chessboard) calibration pattern in
various poses, which is enough to perform the calibration [Bouguet-WWW].
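For reference, conventional pattern-based calibration can be sketched in a few
lines. The sketch below uses OpenCV rather than the Matlab toolbox of
[Bouguet-WWW], and the board size, square size and image filenames are
assumptions for illustration only.

    # Sketch of conventional chessboard calibration (OpenCV counterpart of the
    # Bouguet toolbox workflow); board size, square size and filenames assumed.
    import glob
    import numpy as np
    import cv2

    board_size = (9, 6)   # inner corners of the chessboard (assumed)
    square = 0.025        # square side in meters (assumed)

    # 3D coordinates of the planar pattern corners, identical for every view
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square

    obj_points, img_points = [], []
    for fname in glob.glob("calib_*.png"):              # assumed image names
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic matrix K and distortion coefficients from all detected views
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("reprojection RMS:", rms)
    print("K =\n", K)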
Discrete cameras simply combine pixels in a fixed manner but without a
specific arrangement. Discrete cameras are interesting for robotic applications
because they allow designs specific to the tasks at hand [Neumann05], but they pose a
challenge right from the calibration stage. Recent research has shown that
discrete cameras that can be moved freely and have a central arrangement of
the pixels can be calibrated from natural scenes [Grossmann10, Galego13]. This
MSc project focuses on building and calibrating a discrete camera.
The construction of the camera will be based on a standard camera and a
standard lens. Between the camera and the lens, a bundle of optical fibers
rigidly glued to each other will be inserted [Neumann05].
A number of calibration methodologies are available for discrete
cameras. The main idea is that neighbouring photocells view approximately the same
direction in the world and thus have highly correlated time-signal
readings (pixel streams). Unlike many conventional calibration
methods in use today, calibrating discrete cameras requires moving them within
a diversified (textured) natural world. This project follows approaches based
on information theory and machine learning methodologies.
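The sketch below illustrates this idea under simplifying assumptions: given a
T x N array of pixel-stream readings, pairwise stream correlations are turned
into dissimilarities and embedded with classical multidimensional scaling to
recover a rough pixel layout. This is a toy stand-in for the
information-theoretic and learning methods of [Grossmann10, Galego13], not the
actual algorithms.

    # Toy sketch: neighbouring photocells have highly correlated pixel streams,
    # so a correlation-based embedding can recover an approximate pixel layout.
    # 'streams' is an assumed T x N array (T time samples, N photocells).
    import numpy as np

    def embed_pixels(streams, dim=2):
        corr = np.corrcoef(streams.T)            # N x N stream correlations
        dissim = 1.0 - corr                      # identical streams -> distance 0
        n = dissim.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
        B = -0.5 * J @ (dissim ** 2) @ J         # classical MDS Gram matrix
        vals, vecs = np.linalg.eigh(B)
        order = np.argsort(vals)[::-1][:dim]     # keep the largest eigenvalues
        return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

    # Synthetic usage: pixels with close (hidden) directions see similar signals
    rng = np.random.default_rng(0)
    directions = np.sort(rng.uniform(0, np.pi, 40))   # hidden pixel directions
    t = np.arange(1000)[:, None]
    streams = (np.sin(0.05 * t + directions[None, :])
               + 0.1 * rng.standard_normal((1000, 40)))
    layout = embed_pixels(streams, dim=1)
    print(layout.ravel().argsort())   # should roughly recover the direction order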
The calibration methodology will first be developed in simulation
and then with real cameras, namely hand-held cameras or pan-tilt-zoom cameras mounted on
a static base or on mobile robots.
The main steps of the work are therefore the following:
- building a simulated camera that allows acquiring calibration data (a minimal simulation sketch follows this list)
- testing the calibration of the simulated camera
- assembling the discrete central camera
- calibrating the assembled camera
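As a starting point for the first step, a simulated central discrete camera can
be as simple as a set of photocells with unknown fixed directions viewing a
synthetic panoramic texture while the camera rotates. The sketch below makes
that concrete under assumptions of our own (pan-only motion, a sinusoidal
texture model, and the output filename); it only produces the pixel streams
that the calibration step would consume.

    # Minimal simulated central discrete camera (pan-only motion assumed).
    # Each photocell has a fixed, unknown viewing direction; the 'world' is a
    # smooth panoramic texture indexed by absolute direction.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pixels, n_frames = 100, 2000

    # Unknown, fixed pixel directions on the unit circle (central arrangement)
    pixel_dirs = rng.uniform(0, 2 * np.pi, n_pixels)

    # Random smooth world texture: a sum of sinusoids over direction
    freqs = np.arange(1, 8)
    amps = rng.normal(size=len(freqs))
    phases = rng.uniform(0, 2 * np.pi, len(freqs))

    def world(theta):
        # Texture intensity seen along absolute directions 'theta'
        return np.sum(amps[:, None] * np.sin(freqs[:, None] * theta[None, :]
                                             + phases[:, None]), axis=0)

    # Random-walk pan trajectory and resulting pixel streams (frames x pixels)
    pan = np.cumsum(0.01 * rng.standard_normal(n_frames))
    streams = np.stack([world(pixel_dirs + a) for a in pan])
    np.save("simulated_streams.npy", streams)   # assumed output for calibration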
References:
[Lytro_www] Lytro light field camera, https://www.lytro.com/camera/
[Bouguet-WWW] Jean-Yves Bouguet, "Camera calibration toolbox for matlab", http://www.vision.caltech.edu/bouguetj/calib_doc/
[Hassanpour04] Reza Hassanpour, Volkan Atalay, "Camera auto-calibration using a sequence of 2D images with small rotations", Pattern Recognition Letters, Vol. 25, Issue 9, pp. 989-997, 2 July 2004.
[Agapito01] L. Agapito, E. Hayman, I. D. Reid, "Self calibration of rotating and zooming cameras", Int. J. Comput. Vision, 45(2), pp. 107-127, 2001.
[Neumann05] Jan Neumann, Cornelia Fermuller, Yiannis Aloimonos, Vladimir Brajovic, "Compound Eye Sensor for 3D Ego Motion Estimation", IROS 2005. See also http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.147.929
[Grossmann10] Etienne Grossmann, José António Gaspar, Francesco Orabona, "Discrete camera calibration from pixel streams", Computer Vision and Image Understanding (Special issue on Omnidirectional Vision, Camera Networks and Non-conventional Cameras), Vol. 114, Issue 2, pp. 198-209, February 2010.
[Galego13] R. Galego, R. Ferreira, A. Bernardino, E. Grossmann, J. Gaspar, "Topological Auto-Calibration of Central Imaging Sensors", IbPRIA 2013.
Requirements (grades, required courses, etc):
-
Expected results:
At the end of the work, the students will have enriched their experience
in computer vision. In particular, they are expected to develop and assess:
- algorithms for calibrating central cameras.
Place for conducting the work:
ISR / IST
More MSc dissertation proposals on Computer and Robot Vision in:
http://omni.isr.ist.utl.pt/~jag