Network cameras

 

MSc dissertation proposal 2010/2011

 

Network Camera Calibration

 

 

Introduction:

 

The increasing need for surveillance in public spaces and recent technological advances in embedded video compression and communications have made camera networks ubiquitous. Typical environments include single rooms, complete buildings, streets, highways and tunnels. While these technological advances have already allowed such widespread installation of camera networks, automatically extracting information from the video streams is still an active research area. One of the crucial problems in camera networks is to obtain a correct calibration of each camera in terms of a unique, global reference frame. Such a calibration is a fundamental requirement for further higher-level processing (e.g. people/car tracking, event detection, metrology) and is nowadays one of the most active research subjects in Computer Vision. The difficulties generally arise from the lack of overlapping fields of view (FOV) among the cameras, which prevents the estimation of a common reference frame for all cameras. In such a scenario, exactly geolocating each sensor is extremely complex without the aid of special equipment (moving calibration patterns or GPS) or a priori reference images such as a panorama of the given environment.

 

Objectives:

 

The main objective of this work is to estimate the calibration of a network of cameras, possibly with non-overlapping fields of view. Calibration comprises both the intrinsic and the extrinsic parameters of the cameras. The non-overlapping problem is expected to be overcome using a mobile robot capable of estimating its pose in a global frame. The robot is equipped with one calibrated camera which, we assume, can be oriented so as to observe world points that are also seen by the network of cameras.

 

Detailed description:

 

 

In order to calibrate the camera network we propose an approach based on the visual reconstruction of some points of the scene. These points are expressed in a unique, global (also termed world) coordinate frame, provided by self-localization information defined by the starting location of a mobile robot. Given a number of reconstructed 3D points of the environment and their images, we can calibrate the network of cameras using standard computer vision methodologies [Hartley00]. The reconstruction of the 3D points comprises two main steps, namely matching image points and computing their 3D locations. The matching is based on Scale Invariant Feature Transform (SIFT) features, state-of-the-art features well known to provide a very robust matching procedure [LoweWWW, Lowe04], and the computation of the 3D locations is based on Visual Simultaneous Localization and Map Building (vSLAM) [Goncalves05, Karlsson05].
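
As an illustration of these standard methodologies, the following is a minimal sketch, not a definitive implementation, of the Direct Linear Transform (DLT) calibration described in [Hartley00]: given six or more correspondences between reconstructed world points and their images in one fixed camera, it estimates that camera's 3x4 projection matrix and splits it into intrinsic and extrinsic parameters. The function names and the use of Python with NumPy/SciPy are illustrative choices, not part of the proposal.

    import numpy as np
    import scipy.linalg

    def dlt_calibrate(X, x):
        """Estimate the 3x4 projection matrix P from n >= 6 point pairs.

        X : (n, 3) array of world points in the global frame.
        x : (n, 2) array of the corresponding image points (pixels).
        For numerical stability the points should first be normalized,
        as recommended in [Hartley00]; omitted here for brevity.
        """
        A = []
        for (Xw, Yw, Zw), (u, v) in zip(X, x):
            Xh = [Xw, Yw, Zw, 1.0]
            # Two rows per correspondence, derived from x cross (P X) = 0.
            A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
            A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
        # P is the right singular vector of the smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(A))
        return Vt[-1].reshape(3, 4)

    def decompose(P):
        """Split P ~ K [R | t] with an RQ decomposition of its 3x3 block."""
        K, R = scipy.linalg.rq(P[:, :3])
        S = np.diag(np.sign(np.diag(K)))   # fix the sign ambiguity of RQ
        K, R = K @ S, S @ R
        t = np.linalg.inv(K) @ P[:, 3]
        return K / K[2, 2], R, t

Running dlt_calibrate on points reconstructed by the robot and observed by a network camera, followed by decompose, would yield that camera's intrinsics K and its pose (R, t) directly in the robot's global frame.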

 

Detailed steps of the work proposal:

 

(i) Studying and testing vSLAM by reconstructing points and trajectories in simulated or real scenarios, based on various software packages freely available on the Internet.
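
Before adopting a full vSLAM package, the two-view geometry at the core of monocular vSLAM can be exercised directly. The sketch below, a hypothetical Python/OpenCV example and not the actual algorithm of [Goncalves05, Karlsson05], recovers the relative pose between two views taken by the robot's calibrated camera and triangulates the matched points. Monocular reconstruction is defined only up to scale; in the proposed setup the robot's self-localization fixes both the scale and the global frame.

    import cv2
    import numpy as np

    def two_view_reconstruction(pts1, pts2, K):
        """Relative pose and sparse structure from two calibrated views.

        pts1, pts2 : (n, 2) float32 arrays of matched pixel coordinates.
        K          : 3x3 intrinsic matrix of the robot's camera.
        """
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        # Triangulate the matches; the output is homogeneous, up to scale.
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return R, t, (Xh[:3] / Xh[3]).T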

 

(ii) Testing the detection and matching of SIFT features in simulated or real scenarios. Studying methodologies to improve the precision of the registration of the SIFT features.
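
The matching can be prototyped either with Lowe's demo software [LoweWWW] or, as in the sketch below, with OpenCV, assuming a build that ships SIFT (version 4.4 or later); the function name and the ratio threshold are illustrative choices. The ratio test of [Lowe04] discards ambiguous matches, and the sub-pixel keypoint locations it returns are one starting point for studying registration precision.

    import cv2

    def sift_matches(img1, img2, ratio=0.75):
        """Match SIFT keypoints between two grayscale images.

        Returns a list of ((u1, v1), (u2, v2)) matched pixel coordinates,
        filtered with the ratio test of [Lowe04].
        """
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        # Keep a 2-NN match only when the best candidate is clearly
        # better than the second best (Lowe's ratio test).
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]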

 

(iii) Testing the camera calibration methodologies based on the information obtained in the previous steps.
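
As one concrete way to test this step: when the intrinsic parameters of a network camera are already known (for instance from the DLT sketch above), its extrinsic parameters in the robot's global frame can be estimated robustly with a PnP solver. The sketch below uses OpenCV's solvePnPRansac; the variable names are hypothetical.

    import cv2
    import numpy as np

    def extrinsics_from_world_points(X, x, K, dist=None):
        """Pose of one fixed network camera in the robot's global frame.

        X : (n, 3) world points reconstructed in step (i).
        x : (n, 2) their projections in the camera, matched in step (ii).
        K : the camera's 3x3 intrinsic matrix.
        """
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            X.astype(np.float32), x.astype(np.float32), K, dist)
        R, _ = cv2.Rodrigues(rvec)   # world-to-camera rotation
        C = -R.T @ tvec              # camera centre in the world frame
        return R, tvec, C, inliers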

 

References:

 

[Leite08] N. Leite, A. Del Bue, and J. Gaspar. Calibrating a network of cameras based on visual odometry. In Proc. of IV Jornadas de Engenharia Electrónica e Telecomunicações e de Computadores, pages 174–179, Lisbon, Portugal, November 2008.

 

[Goncalves05] L. Goncalves, E. Di Bernardo, D. Benson, M. Svedman, J. Ostrowski, N. Karlsson, and P. Pirjanian. A visual front-end for simultaneous localization and mapping. In International Conference on Robotics and Automation, 2005. 

 

[Karlsson05] N. Karlsson, E. Di Bernardo, J. Ostrowski, L. Goncalves, P. Pirjanian, and M. Munich. The vslam algorithm for robust localization and mapping. In International Conference on Robotics and Automation, 2005. 

 

[Hartley00] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.

 

[LoweWWW] David G. Lowe. Demo software: SIFT keypoint detector. http://www.cs.ubc.ca/~lowe/keypoints/.

 

[Lowe04] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.

 

 

Requirements (grades, required courses, etc):

-

 

Expected results:

 

At the end of the work, the students will have enriched their experience in computer vision applied to camera network setups. In particular, they are expected to develop and assess:

- precise image (feature) registration methodologies

- precise calibration methodologies

 

 

Place for conducting the work proposal:

ISR / IST

 

 

More MSc dissertation proposals on Computer and Robot Vision at:

 

http://omni.isr.ist.utl.pt/~jag