Network Cameras
MSc dissertation proposal 2014/2015
In-Situ Setting of Surveillance Cameras
Introduction:
The increasing need for surveillance in public spaces and the recent technological advances in embedded video compression and communications have made camera networks ubiquitous. Typical environments include single rooms, complete buildings, streets, highways, tunnels, etc. While these technological advances have already allowed such widespread installation of camera networks, automatically extracting information from the video streams is still an active research area.
One of the crucial problems in camera networks is obtaining a correct calibration of each camera with respect to a unique, global reference frame. Such a calibration is a fundamental requirement for further higher-level processing (e.g. people/car tracking, event detection, metrology) and is nowadays one of the most active research subjects in Computer Vision. The difficulties generally arise from the lack of overlapping fields of view (FOVs) among the cameras, which prevents the estimation of a common reference frame for all cameras. In such a scenario, exactly geolocating each sensor is difficult without the aid of special equipment (moving calibration patterns or GPS) or a priori reference images such as a panorama of the given environment.
Objectives:
The main objective of this work is to estimate the calibration of each camera within a network of cameras, possibly with non-overlapping fields of view. Calibration comprises both the intrinsic and extrinsic parameters. The non-overlapping problem is overcome by using a mobile robot capable of estimating its pose in a global frame. The robot is equipped with one calibrated camera which, we assume, can be oriented so as to observe world points also seen by the network cameras, and therefore transfer the calibration to the networked cameras.
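To make the calibration-transfer idea concrete, the following minimal numpy sketch shows the pose composition involved. The transform names (T_world_robot, T_robot_cam) and all numeric values are illustrative assumptions, not part of the proposal.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative inputs: in practice these come from the robot localization
# system and from a prior hand-eye calibration of the on-board camera.
T_world_robot = make_T(np.eye(3), np.array([2.0, 1.0, 0.0]))  # robot pose in the global frame
T_robot_cam = make_T(np.eye(3), np.array([0.1, 0.0, 0.5]))    # on-board camera pose in the robot frame

# A 3D point expressed in the on-board camera frame (e.g. from the depth sensor).
X_cam = np.array([0.3, -0.2, 4.0, 1.0])

# Composing the transforms expresses the point in the global frame, where it can
# serve as a known 3D reference for calibrating a fixed network camera.
X_world = T_world_robot @ T_robot_cam @ X_cam
print(X_world[:3])
```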
Detailed description:
In order to calibrate each camera of the network we propose using a robot localized with respect to a global coordinate system. The robot is equipped with a colour-depth camera which can be pointed towards a field of view similar to the one observed by the fixed network camera.
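As an illustration of how the colour-depth camera can provide 3D points, a minimal pinhole back-projection sketch follows; the intrinsic values and the pixel/depth measurement are illustrative assumptions.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth to a 3D point in the
    colour-depth camera frame, using a simple pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Illustrative intrinsics for the robot's colour-depth camera.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

# A pixel observed by the robot camera together with its measured depth (metres).
X_cam = backproject(400, 250, 3.2, fx, fy, cx, cy)
print(X_cam)
```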
Given a number of 3D points of the environment and their images, one can calibrate a camera using standard computer vision methodologies [Hartley00, Leite08]. Using 3D lines in the scene and their images in the camera, one can obtain more accurate calibrations [Silva12]. Developing automatic correspondences between 3D lines and 2D imaged lines, possibly building on automatic SIFT-based matching of 3D points and 2D points [Lowe04], is an interesting and practical contribution to the calibration of network cameras.
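A minimal sketch of the standard point-based DLT calibration [Hartley00] is given below; it omits the line features and radial distortion handled in [Silva12], and the synthetic data and helper names are illustrative assumptions.

```python
import numpy as np
import cv2

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 projection matrix P from n >= 6 correspondences between
    3D points X (n x 3) and pixel coordinates x (n x 2) using the standard DLT."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # right singular vector of the smallest singular value

# Tiny synthetic check: project known points with a known P, then recover it.
K_true = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K_true @ np.hstack([np.eye(3), [[0.1], [0.0], [2.0]]])
X = np.random.rand(8, 3) * 2.0
x_h = (P_true @ np.hstack([X, np.ones((8, 1))]).T).T
x = x_h[:, :2] / x_h[:, 2:]
P_est = dlt_projection_matrix(X, x)
K, R, t_h = cv2.decomposeProjectionMatrix(P_est)[:3]  # t_h is homogeneous (4x1)
print(K / K[2, 2])  # recovered intrinsics, up to scale/sign, should match K_true
```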
Work-proposal detailed steps:
- Testing the detection and correspondence of SIFT features in simulated and real scenarios (a minimal matching sketch follows this list).
- Testing the detection and correspondence of 3D lines of the scene with imaged 2D lines.
- Testing the camera calibration methodologies based on the information obtained in the previous steps.
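As a starting point for the first step, here is a minimal OpenCV sketch of SIFT detection and matching with Lowe's ratio test [Lowe04]; the image file names are illustrative, and an OpenCV build that includes SIFT is assumed.

```python
import cv2

# Load the robot-camera image and the fixed network-camera image (paths are illustrative).
img_robot = cv2.imread("robot_view.png", cv2.IMREAD_GRAYSCALE)
img_fixed = cv2.imread("network_view.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in both views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_robot, None)
kp2, des2 = sift.detectAndCompute(img_fixed, None)

# Match descriptors and keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```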
References:
[Silva12] M. Silva, R. Ferreira and J. Gaspar. Camera Calibration using a Color-Depth Camera: Points and Lines Based DLT including Radial Distortion. In Workshop on Color-Depth Camera Fusion in Robotics, held with IROS 2012.
[Leite08] N. Leite, A. Del Bue and J. Gaspar. Calibrating a Network of Cameras Based on Visual Odometry. In Proc. of IV Jornadas de Engenharia Electrónica e Telecomunicações e de Computadores, pp. 174-179, Lisbon, Portugal, November 2008.
[Hartley00] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
[Lowe04] D. G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, pages 91-110, 2004.
Requirements (grades, required courses, etc.):
-
Expected results:
At the end of the work, the students will have enriched their experience in computer vision applied to camera network setups. In particular, they are expected to develop and assess:
- precise image (feature) registration methodologies
- precise calibration methodologies
Place for conducting the work-proposal:
ISR / IST
More MSc dissertation proposals on Computer and Robot Vision at: http://omni.isr.ist.utl.pt/~jag