Mobile Cameras: Pan-tilt-zoom cameras and cameras mounted on mobile robots

 

MSc dissertation proposal 2010/2011

 

Mobile Cameras Calibration

 

 

 

Introduction:

 

Conventional calibration of video cameras assumes static cameras in front of which one shows a structured calibration pattern in various poses [Bouguet-WWW]. Nowadays, many cameras have motion degrees of freedom, e.g. pan-tilt-zoom cameras, or are simply mounted on top of robot arms or mobile robots. These motion degrees of freedom can be exploited by the calibration process, removing the need to use calibration patterns and to show them at various poses.

 

 

Objectives:

 

In this work, the main objective is to explore the calibration of (i) constrained-motion cameras (e.g. pan-tilt-zoom) and (ii) unconstrained-motion (hand-held) cameras. In both cases, the cameras are expected to be calibrated intrinsically (focal length, scaling, principal point) by assuming static scenarios.
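The intrinsic parameters listed above are conventionally collected in an upper-triangular matrix K. A minimal sketch, with illustrative (assumed) numeric values, of how K maps a 3D point in camera coordinates to pixel coordinates:

```python
import numpy as np

# Intrinsic matrix K of a pinhole camera (values are illustrative):
# fx, fy combine focal length and pixel scaling, (cx, cy) is the principal point.
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Project a 3D point (in camera coordinates) to pixel coordinates.
X = np.array([0.1, -0.2, 2.0])
x = K @ X
u = x[:2] / x[2]  # perspective division
print(u)  # [360. 160.]
```

Estimating the entries of K (without a calibration pattern) is precisely what the self-calibration methods discussed below aim to do.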

 

 

Detailed description:

 

 

Determining the intrinsic parameters of a mobile camera without any assumptions about the imaged world is called camera self- or auto-calibration [Hassanpour04]. While capturing a sequence of images, the camera motion can be either general or restricted, depending on the camera degrees of freedom. For instance, a pan-tilt camera can rotate but cannot translate, while a camera mounted on a helicopter can both rotate and translate. In both motion cases, general or restricted, there are some specific motions, known as critical motions, for which no unique solution can be found for the camera parameters (see e.g. [Sturm97]). Nevertheless, the non-critical cases are more than enough for many interesting commercial applications, such as augmented reality in the film industry [2d3-WWW].

 

Considering the non-critical cases, various calibration methodologies exist for both general and restricted motion. For example, [Hartley94, Hartley00] give a practical calibration solution for pan-tilt cameras based on the rearrangement of the composition of rotation matrices. [Agapito01] further extends the methodology to the case of pan-tilt cameras that can also zoom (i.e. vary their intrinsic parameters). In the case of general motion, one finds methodologies based on the "absolute quadric", the "Kruppa equations", "essential matrix" decomposition, etc. (see [Hassanpour04] for more details).
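The rotation-based approach of [Hartley94] can be sketched on synthetic data. For a purely rotating camera with fixed intrinsics K, image pairs are related by the homography H = K R K⁻¹, so ω = K Kᵀ satisfies H ω Hᵀ = ω: a linear constraint on the six unknowns of the symmetric matrix ω. The sketch below (assumed ground-truth values, exact noise-free homographies) solves these constraints and reads K back from ω, assuming zero skew:

```python
import numpy as np

def rot_y(a):  # pan
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):  # tilt
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Ground-truth intrinsics (illustrative) and a few pan/tilt rotations.
K_true = np.array([[700.0, 0.0, 320.0],
                   [0.0, 700.0, 240.0],
                   [0.0, 0.0, 1.0]])
Rs = [rot_y(0.2), rot_x(0.15), rot_y(-0.1) @ rot_x(0.1)]

# Inter-image homographies of a purely rotating camera: H = K R K^-1.
Hs = [K_true @ R @ np.linalg.inv(K_true) for R in Rs]

# Each H yields linear constraints H w H^T = w on the 6 distinct
# entries of the symmetric matrix w = K K^T.
idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
A = []
for H in Hs:
    for r, c in idx:
        row = np.zeros(6)
        for k, (i, j) in enumerate(idx):
            coef = H[r, i] * H[c, j]
            if i != j:                 # symmetric entry counted twice
                coef += H[r, j] * H[c, i]
            if (i, j) == (r, c):       # subtract the right-hand side w[r,c]
                coef -= 1.0
            row[k] = coef
        A.append(row)

# Null vector of A gives w up to scale; normalize so that w[2,2] = 1.
_, _, Vt = np.linalg.svd(np.array(A))
w6 = Vt[-1]
w = np.zeros((3, 3))
for k, (i, j) in enumerate(idx):
    w[i, j] = w[j, i] = w6[k]
w /= w[2, 2]

# With zero skew, w = K K^T gives the intrinsics in closed form.
cx, cy = w[0, 2], w[1, 2]
fx = np.sqrt(w[0, 0] - cx ** 2)
fy = np.sqrt(w[1, 1] - cy ** 2)
print(fx, fy, cx, cy)
```

With real images, the homographies would be estimated from matched features rather than composed from known rotations, and the constraints would be solved in a least-squares sense.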

 

This work is expected to test a number of the calibration methodologies referred to above. The testing will first be based on simulation and, in some cases, will use real cameras (hand-held, or pan-tilt-zoom mounted on a static base or on mobile robots). The main steps of the work are therefore the following:

- build a simulated setup (VRML world with a controlled camera / viewpoint)

- extraction and matching (correspondence) of image features (e.g. SIFT [Lowe04, Lowe-WWW])

- estimation of fixed intrinsic parameters on a pan-tilt camera

- estimation of the intrinsic parameters on a pan-tilt-zoom camera

- estimation of the intrinsic and extrinsic (up-to a scale factor) parameters of a hand-held camera
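The first two steps above can be prototyped without a VRML world: a synthetic point cloud plus a pan-tilt camera model already provides matched features "by construction" (in practice they would come from SIFT matching). A minimal sketch, with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "world": random 3D points in front of the camera, standing in
# for the proposed VRML scene (ranges are illustrative).
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def pan_tilt(pan, tilt):
    """Rotation of a pan-tilt head: pan about the y axis, tilt about x."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    Rp = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rt = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    return Rt @ Rp

def project(K, R, X):
    """Project world points with a rotating (non-translating) camera."""
    x = (K @ R @ X.T).T
    return x[:, :2] / x[:, 2:3]

# Two views at different pan/tilt angles; the correspondences between
# u0 and u1 are known by construction (row i matches row i).
u0 = project(K, pan_tilt(0.0, 0.0), X)
u1 = project(K, pan_tilt(0.1, -0.05), X)
print(u0.shape, u1.shape)  # (50, 2) (50, 2)
```

Since the camera only rotates, u0 and u1 are related exactly by the homography K R K⁻¹, which is the input needed by the pan-tilt self-calibration step.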

 

 

References:

 

[Bouguet-WWW] Jean-Yves Bouguet, "Camera calibration toolbox for matlab", http://www.vision.caltech.edu/bouguetj/calib_doc/

 

[Hassanpour04] Camera auto-calibration using a sequence of 2D images with small rotations, Reza Hassanpour, Volkan Atalay, Pattern Recognition Letters, Vol.25, Issue 9, 2 July 2004, Pages 989-997

 

[Sturm97] Sturm, P., 1997. Critical motion sequences for monocular self-calibration and uncalibrated Euclidean reconstruction. In: Conference on Computer Vision and Pattern Recognition. pp. 1100–1105.

 

[2d3-WWW] "2d3 develop and deliver technology built on unrivalled vision science expertise", http://www.2d3.com

 

[Hartley94] Self-calibration from multiple views with a rotating camera, Richard Hartley, ECCV'94, pp.471-478

 

[Agapito01] Agapito, L., Hayman, E., Reid, I.D., 2001. Self calibration of rotating and zooming cameras.  Int. J. Comput. Vision 45(2), 107–127.

 

[Hartley00] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000, pages 150–152.

 

[Lowe04] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.

 

[Lowe-WWW] David G. Lowe. Demo software: SIFT keypoint detector. http://www.cs.ubc.ca/~lowe/keypoints/

 

 

Requirements (grades, required courses, etc):

-

 

Expected results:

 

At the end of the work, the students will have enriched their experience in computer vision. In particular, they are expected to develop and assess:

- geometric models for pan-tilt-zoom cameras;

- algorithms for calibrating cameras having restricted or general motion.

 

 

Place for conducting the work-proposal:

ISR / IST

 

 

More MSc dissertation proposals on Computer and Robot Vision in:

 

http://omni.isr.ist.utl.pt/~jag