omni_images

MSc dissertation proposal 2016/2017

 

Improved Object Recognition with Omnidirectional Cameras

 

-- Information at fenix:

 

Objectives:

 

Omnidirectional cameras have a field-of-view much wider than that of a conventional camera. As the name indicates, the term refers to cameras with a 360º field-of-view, such as the 360Fly, Ricoh Theta, or Bublcam; in practice, cameras with a field-of-view near or above 180º are also called omnidirectional. This awareness of the camera's surroundings makes these cameras attractive for robotics and surveillance applications.

 

The objectives of this work are:

1. Acquire a catalogued dataset of omnidirectional images.

2. Develop an algorithm to recognize objects in omnidirectional images.

3. Evaluate the method's accuracy by comparing its results with those of existing algorithms for object recognition in omnidirectional images.

 

Requirements:

--

 

Place for conducting the work-proposal:

ISR / IST

 

Observations:

 

The wide field-of-view of these cameras introduces large distortions in the images. To the untrained eye, these images look unnatural and are difficult to analyze. Furthermore, as the number of deployed cameras grows, an automated way of recognizing objects and events in a scene becomes useful to help a human operator identify a particular situation of interest.

 

The majority of object recognition algorithms are designed for, or inspired by, conventional camera images. Thus, applying object recognition to omnidirectional cameras requires an additional step that unwarps the omnidirectional image into a conventional one. However, unwarping requires additional computation and introduces noise into the image. Furthermore, the standard accuracy assessment of object recognition relies on rectangular regions, which are not suitable for omnidirectional images.
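The unwarping step mentioned above can be illustrated with a minimal polar-to-panoramic remapping. This is a hedged sketch, not the method of any cited paper: the function name, the nearest-neighbour sampling, and the annulus bounds `r_min`/`r_max` are illustrative assumptions; a real implementation would calibrate the mirror center and interpolate between pixels.

```python
import numpy as np

def unwarp_to_panorama(omni, center, r_min, r_max, out_h=64, out_w=256):
    """Map an annular region of an omnidirectional image to a panoramic strip.

    Hypothetical parameters: `center` is the (row, col) image center of the
    mirror, and `r_min`/`r_max` bound the useful annulus in pixels.
    Nearest-neighbour sampling keeps the sketch short.
    """
    cy, cx = center
    rows = np.arange(out_h)
    cols = np.arange(out_w)
    # Each output column is an azimuth angle; each output row a radius.
    theta = 2.0 * np.pi * cols / out_w
    radius = r_min + (r_max - r_min) * rows / (out_h - 1)
    rr = radius[:, None]                              # shape (out_h, 1)
    src_y = np.round(cy + rr * np.sin(theta)).astype(int)
    src_x = np.round(cx + rr * np.cos(theta)).astype(int)
    src_y = np.clip(src_y, 0, omni.shape[0] - 1)
    src_x = np.clip(src_x, 0, omni.shape[1] - 1)
    return omni[src_y, src_x]

# Tiny synthetic example: a 201x201 "omnidirectional" image whose
# right half is bright, unwrapped around its center.
omni = np.zeros((201, 201), dtype=np.uint8)
omni[:, 100:] = 255
pano = unwarp_to_panorama(omni, center=(100, 100), r_min=20, r_max=90)
```

The rounding in the sampling step is one source of the noise that [Daniilidis2002] attributes to unwarping: several panorama pixels can draw from the same source pixel near the image center, while pixels near the rim are skipped.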

 

Therefore, in this work we want to develop a method capable of recognizing objects directly in omnidirectional images, by accounting for the distortion that the omnidirectional camera introduces in the features. The method should be applied to a real dataset and compared with state-of-the-art methods for object recognition in omnidirectional images.

 

 

-- More information:

 

Omnidirectional cameras are ubiquitous in robotics and surveillance applications due to their awareness of the full surroundings. Nonetheless, strategies that work directly on the highly distorted images are scarce.

 

Many of the alternatives require unwarping the omnidirectional images to perspective images before applying conventional algorithms. However, image unwarping is computationally expensive and introduces noise into the image [Daniilidis2002]. Therefore, current algorithms must be adapted to work directly on omnidirectional images, which can be accomplished by introducing the cameras' varying geometry into these algorithms.

 

In this work we intend to develop an object detection algorithm that takes the geometry of omnidirectional cameras into account and is also resilient to deformation and occlusion. A comparison with existing algorithms for omnidirectional images should also be performed.

 

Objectives

The objectives of this work are: (i) acquire a catalogued dataset of omnidirectional images, and (ii) detect and recognize objects in omnidirectional images.

 

 

Detailed description:

 

The process of detecting objects by matching a pair of images consists of three main steps:

1. Identify points of interest in both images. Their locations can be found with algorithms such as the Harris corner detector [Harris1988]. A descriptor built from the surrounding image data is then attached to each point so that points can be distinguished and compared with one another.

2. Compare the points of interest between the images and pair the most similar ones into correspondences. Some points will be matched incorrectly (outliers) and others correctly (inliers).

3. Post-process the correspondences in order to minimize the number of outliers.
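The three steps above can be sketched in a few lines of numpy. This is an illustrative toy, not a production pipeline: `harris_response` is a bare-bones Harris detector [Harris1988] using a 3x3 box window instead of the usual Gaussian, the descriptor is simply a raw square patch, and the outlier-removal step is simplified to a mutual-best-match check rather than a full robust estimator.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Step 1: Harris corner response via finite-difference gradients.
    A minimal sketch; real detectors smooth with a Gaussian window."""
    img = img.astype(float)
    iy, ix = np.gradient(img)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):
        # 3x3 box sum over the gradient products (interior pixels only).
        out = np.zeros_like(a)
        out[1:-1, 1:-1] = sum(
            a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

def match_patches(pts_a, pts_b, img_a, img_b, half=2):
    """Steps 2 and 3: pair points by smallest sum-of-squared-differences
    between square patches, then keep (i, j) only if j's best match is
    also i (a simplified stand-in for proper outlier removal)."""
    def patch(img, p):
        y, x = p
        return img[y - half:y + half + 1, x - half:x + half + 1].astype(float)

    cost = np.array([[np.sum((patch(img_a, a) - patch(img_b, b)) ** 2)
                      for b in pts_b] for a in pts_a])
    fwd = cost.argmin(axis=1)
    bwd = cost.argmin(axis=0)
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# A bright square: corners give positive responses, edge midpoints negative.
sq = np.zeros((20, 20))
sq[5:15, 5:15] = 1.0
resp = harris_response(sq)

# Matching an image against itself pairs each point with itself.
img = np.arange(400, dtype=float).reshape(20, 20)
matches = match_patches([(5, 5), (10, 10)], [(5, 5), (10, 10)], img, img)
```

Note that both the square patch descriptor and the square matching window are exactly the perspective-camera assumptions that [Svoboda2001] shows to be inappropriate for omnidirectional images.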

 

Normally, the last two steps use rectangular windows designed for cameras with perspective projection, but these are not appropriate for omnidirectional cameras [Svoboda2001]. Alternatives require image unwarping, which removes the distortion from the omnidirectional image but is computationally expensive and introduces noise into the image [Daniilidis2002].

 

Svoboda and Pajdla [Svoboda2001] developed a way of computing an adequate boundary for image matching in omnidirectional images: they defined the neighborhood on the surface of the mirror and then projected that small patch onto the omnidirectional image plane. A similar strategy was applied by Ieng et al. [Ieng2003], who proposed patches whose angular aperture varies with the resolution of the omnidirectional camera. Other approaches [Demonceaux2006] are derived from the equivalent-sphere theorem proposed by Geyer and Daniilidis [Geyer2001].
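The effect that motivates these adaptive windows can be shown numerically under the equivalent-sphere model [Geyer2001]: a neighborhood of fixed angular size on the viewing sphere projects to very different image-plane footprints depending on where it sits. The sketch below assumes the stereographic projection that the unified model assigns to para-catadioptric cameras; the function names and the 1º aperture are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def stereographic(theta, phi):
    """Project a viewing direction (elevation `theta` from the optical
    axis, azimuth `phi`) on the unit sphere to the image plane, using
    the stereographic projection of the para-catadioptric case of the
    unified model [Geyer2001]: r = 2 * tan(theta / 2)."""
    r = 2.0 * np.tan(theta / 2.0)
    return r * np.cos(phi), r * np.sin(phi)

def footprint_radius(theta, delta=np.radians(1.0)):
    """Image-plane extent of a neighborhood of fixed angular size
    `delta` centred at elevation `theta`. The same solid angle maps to
    very different pixel areas, which is why [Svoboda2001] defines the
    matching window on the mirror rather than on the image plane."""
    x0, y0 = stereographic(theta, 0.0)
    x1, y1 = stereographic(theta + delta, 0.0)
    return np.hypot(x1 - x0, y1 - y0)

# The same 1-degree patch covers far more image area near the rim
# (80 degrees from the axis) than near the axis (10 degrees).
near_axis = footprint_radius(np.radians(10))
near_rim = footprint_radius(np.radians(80))
```

A fixed-size rectangular window would therefore compare patches that correspond to different amounts of scene content, degrading the matching step.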

 

Tang et al. [Tang2015] used this strategy to develop a pedestrian-tracking algorithm for omnidirectional images that also uses a part-based methodology to accommodate pedestrian deformation and occlusion.

 

This work will focus on adapting object detection algorithms designed for traditional cameras to omnidirectional cameras. In this work we intend to:

- Acquire a catalogued dataset of omnidirectional images.

- Develop an algorithm to detect and recognize objects in omnidirectional images.

- Compare results with existing algorithms that use omnidirectional images (directly or unwarped).

 

 

References:

 

[Harris1988] Harris, Chris, and Mike Stephens. "A combined corner and edge detector." Alvey Vision Conference. Vol. 15. 1988.

 

[Svoboda2001] Svoboda, Tomáš, and Tomáš Pajdla. "Matching in catadioptric images with appropriate windows, and outliers removal." Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 2001.

 

[Daniilidis2002] Daniilidis, Kostas, Ameesh Makadia, and Thomas Bulow. "Image processing in catadioptric planes: Spatiotemporal derivatives and optical flow computation." Omnidirectional Vision, 2002. Proceedings. Third Workshop on. IEEE, 2002.

 

[Ieng2003] Ieng, Sio-hoï, Ryad Benosman, and Jean Devars. "An efficient dynamic multi-angular feature points matcher for catadioptric views." Computer Vision and Pattern Recognition Workshop, 2003. CVPRW'03. Conference on. Vol. 7. IEEE, 2003.

 

[Demonceaux2006] Demonceaux, Cédric, and Pascal Vasseur. "Markov random fields for catadioptric image processing." Pattern Recognition Letters 27.16 (2006): 1957-1967.

 

[Geyer2001] Geyer, Christopher, and Kostas Daniilidis. "Catadioptric projective geometry." International Journal of Computer Vision 45.3 (2001): 223-243.

 

[Tang2015] Tang, Yazhe, et al. "Parameterized Distortion-Invariant Feature for Robust Tracking in Omnidirectional Vision."

 

 

Requirements (grades, required courses, etc):

-

 

Expected results:

 

At the end of the work, the students will have enriched their experience in computer vision. In particular, they are expected to develop and assess the following topics:

- Dataset acquisition and cataloging.

- Omnidirectional camera calibration.

- Object detection and recognition from omnidirectional images.

 

 

More MSc dissertation proposals on Computer and Robot Vision at:

 

http://omni.isr.ist.utl.pt/~jag