Network cameras

 

MSc dissertation proposal 2016/2017

 

 

Estimate Poses of Surveillance Cameras

 

 

-- Information at fenix:

 

Objectives:

 

The main objective of the work is to estimate the calibration of each camera within a network of cameras, assuming non-overlapping fields of view. The non-overlapping problem is solved by using a mobile robot capable of estimating its pose in a global frame. The robot is equipped with one calibrated camera which, we assume, can be oriented so as to observe world points also seen by the networked cameras, and therefore transfer the calibration to each networked camera.

 

 

Requirements (grades, required courses, etc):

-

 

 

Place for conducting the work-proposal:

 

ISR / IST

 

 

Observations:

 

The dissertation work will be carried out in collaboration with Albatroz Engineering, a Portuguese R&D company (http://www.albatroz.engineering/), for the acquisition of the precise datasets necessary for the calibration of the networked cameras.

 

 

-- More information:

 

Motivation:

 

The increasing need for surveillance in public spaces and the recent technological advances in embedded video compression and communications have made camera networks ubiquitous. Typical environments include single rooms, complete buildings, streets, highways, tunnels, etc. While these technological advances have already allowed such a wide installation of camera networks, automatically extracting information from the video streams is still an active research area.

 

One of the crucial problems in camera networks is to obtain a correct calibration of each camera with respect to a unique, global reference frame. Such a calibration is a fundamental requirement for higher-level processing, e.g. people/car tracking, event detection and metrology, which are among the most active research subjects in Computer Vision / Video Surveillance.

 

The problem formulation of this work encompasses the lack of overlapping fields of view (FOV) of the cameras, while one still wants to obtain a common reference frame for all cameras. In this scenario, exactly geolocating each sensor is difficult without the aid of special equipment, such as moving calibration patterns or GPS, or a priori reference images such as a panorama of the given environment.

 

The main objective of the work is to estimate the calibration of each camera within a network of cameras, assuming non-overlapping fields of view. Calibration comprises both the intrinsic and extrinsic parameters. The non-overlapping problem is overcome by using a mobile robot capable of estimating its pose in a global frame. The robot is equipped with one calibrated camera which, we assume, can be oriented so as to observe world points also seen by the networked cameras, and therefore transfer the calibration to each networked camera.

 

 

Detailed description:

 

In order to calibrate each camera of the network, we propose using a robot localized with respect to a global coordinate system. The robot is equipped with a colour-depth camera which can be pointed so that its field of view is similar to the one observed by the networked (fixed) camera.
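
As a sketch of the underlying geometry (the frames and symbols below are our own notation, introduced only for illustration): a 3D point X^{C} measured by the robot's colour-depth camera is expressed in the global frame using the robot localization and the robot-to-camera transform, and the fixed camera is then calibrated from the resulting 3D-2D correspondences,

    X^{W} = {}^{W}T_{R} \, {}^{R}T_{C} \, X^{C},
    \qquad
    \lambda \, x = K \, [R \mid t] \, X^{W}

where {}^{W}T_{R} is the robot pose in the global frame, {}^{R}T_{C} the pose of the robot camera on the robot, x the image of the point in the fixed camera, and K, [R | t] the intrinsic and extrinsic parameters of the fixed camera to be estimated.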

 

Given a number of 3D points of the environment and their images, we can calibrate one camera using standard computer vision methodologies [Hartley00, Leite08]. Using 3D lines in the scene and their images in the camera, one can obtain more accurate calibrations [Silva12]. Developing automatic correspondences of 3D lines with imaged 2D lines, possibly leveraging automatic SIFT-based matching of 3D points and 2D points [Lowe04], is an interesting and practical contribution to the calibration of networked cameras.
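
As a minimal sketch of the points-based case, assuming the 3D-2D correspondences are already available as NumPy arrays (and ignoring the radial distortion and the line features that [Silva12] also models), a DLT calibration in the spirit of [Hartley00] could look like the following:

    import numpy as np
    from scipy.linalg import rq

    def dlt_calibrate(X, x):
        """Direct Linear Transform: estimate the 3x4 projection matrix P from
        n >= 6 world points X (n,3) and their images x (n,2), then factor it
        into intrinsics K, rotation R and translation t [Hartley00]."""
        A = []
        for (Xw, Yw, Zw), (u, v) in zip(X, x):
            Xh = [Xw, Yw, Zw, 1.0]
            A.append([0, 0, 0, 0] + [-c for c in Xh] + [v * c for c in Xh])
            A.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
        # P is the right singular vector associated with the smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(A))
        P = Vt[-1].reshape(3, 4)
        # RQ decomposition of the left 3x3 block gives K (upper triangular) and R.
        K, R = rq(P[:, :3])
        S = np.diag(np.sign(np.diag(K)))   # force positive focal lengths
        K, R = K @ S, S @ R
        t = np.linalg.solve(K, P[:, 3])
        if np.linalg.det(R) < 0:           # resolve the overall sign ambiguity of P
            R, t = -R, -t
        return K / K[2, 2], R, t

In practice one would refine this linear estimate with a non-linear (reprojection error) optimization, as is standard in the cited calibration literature.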

 

Work-proposal detailed steps:

 

- Detection and correspondence of SIFT features in simulated and real scenarios (a minimal matching sketch is given after this list).

 

- Detection and correspondence of 3D lines of the scene with imaged 2D lines.

 

- Testing the camera calibration methodologies based on the information obtained in the previous steps.
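
As a minimal sketch of the first step, assuming OpenCV (cv2) with SIFT available and two hypothetical image files, robot_view.png and network_view.png, the detection and ratio-test matching of SIFT features [Lowe04] between the robot camera and a fixed network camera might look as follows:

    import cv2 as cv

    # Views from the robot's colour-depth camera and the fixed network camera
    # (file names are placeholders for the acquired datasets).
    img_robot = cv.imread('robot_view.png', cv.IMREAD_GRAYSCALE)
    img_fixed = cv.imread('network_view.png', cv.IMREAD_GRAYSCALE)

    # Detect SIFT keypoints and compute descriptors in both views [Lowe04].
    sift = cv.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(img_robot, None)
    kp_f, des_f = sift.detectAndCompute(img_fixed, None)

    # Match descriptors with a 2-NN search and Lowe's ratio test to reject
    # ambiguous correspondences.
    matcher = cv.BFMatcher(cv.NORM_L2)
    knn = matcher.knnMatch(des_r, des_f, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]

    # Matched pixel coordinates in each view.
    pts_robot = [kp_r[m.queryIdx].pt for m in good]
    pts_fixed = [kp_f[m.trainIdx].pt for m in good]

The matched keypoints in the robot view can then be lifted to 3D using the depth channel of the colour-depth camera and the robot localization, providing the 3D-2D correspondences needed in the calibration step.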

 

 

References:

 

[Ortega14] A. Ortega, M. Silva, E. H. Teniente, R. Ferreira, A. Bernardino, J. Gaspar and J. Andrade-Cetto. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud. Sensors, 14(8):13708-13729, 2014.

 

[Silva12] M. Silva, R. Ferreira and J. Gaspar. Camera Calibration using a Color-Depth Camera: Points and Lines Based DLT including Radial Distortion. In WS in Color-Depth Camera Fusion in Robotics, held with IROS 2012.

 

[Leite08] N. Leite, A. Del Bue and J. Gaspar. Calibrating a Network of Cameras Based on Visual Odometry. In Proc. of IV Jornadas de Engenharia Electrónica e Telecomunicações e de Computadores, pages 174-179, Lisbon, Portugal, November 2008.

 

[Hartley00] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision, pages 150-152, 2000.

 

[Lowe04] D. G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, pages 91-110, 2004.

 

 

Expected results:

 

At the end of the work, the students will have enriched their experience in computer vision applied to camera network setups. In particular, they are expected to develop and assess:

- precise image (feature) registration methodologies

- precise calibration methodologies

 

 

 

More MSc dissertation proposals on Computer and Robot Vision at:

 

http://omni.isr.ist.utl.pt/~jag