31753     Development of a Visual Inertial Odometry-Based Autonomous Toy Car for Home Surveillance

Supervisor: José Gaspar

 

In the era of smart homes, the integration of robotics and surveillance systems offers unprecedented convenience and security. This thesis proposes the development of an autonomous toy car equipped with Visual Inertial Odometry (VIO) capabilities for indoor surveillance. The car will navigate from a base station to various locations within a home environment, capturing images to provide comprehensive surveillance coverage.

 

The main objective of this thesis is to design, build, and evaluate a toy car capable of autonomously navigating indoor environments using VIO. Specific objectives include:

1. Assembling a toy car with remote control (RC) capabilities and integrating it with a base station.

2. Incorporating an onboard camera and Inertial Measurement Unit (IMU) for VIO.

3. Implementing pan and tilt mechanisms for the onboard camera to capture images from different perspectives.

4. Developing algorithms for VIO to enable autonomous navigation and mapping.

5. Implementing on-spot rotation for capturing 360-degree panoramic views.

6. Evaluating the performance of the developed system in terms of navigation accuracy, image quality, and overall surveillance effectiveness.

 

Methodology:

1. Toy Car Assembly: The toy car will be assembled with RC components and a Raspberry Pi for wireless communication with the base station.

2. Sensor Integration: An onboard camera and IMU (MPU9250 or ICM-20948) will be integrated into the car for VIO.

3. Pan and Tilt Mechanism: Mechanisms will be developed to enable the onboard camera to pan and tilt for capturing images from different angles.

4. Algorithm Development: Algorithms for VIO will be developed using computer vision and sensor fusion techniques to enable autonomous navigation (a minimal fusion sketch is given after this list).

5. On-Spot Rotation: Mechanisms will be implemented to enable the toy car to rotate on the spot, capturing 360-degree panoramic views.

6. Evaluation: The developed system will be evaluated in a simulated home environment and real-world scenarios to assess its performance in navigation accuracy, image quality, and surveillance coverage.
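For item 4, one possible sensor-fusion building block is sketched below: a complementary filter that combines integrated gyroscope yaw rate with a camera-based heading estimate. This is only an illustration under simplifying assumptions; the signal names, rates, and filter gain are hypothetical and not part of the project specification.

```python
import numpy as np

def fuse_heading(gyro_z, visual_heading, dt, alpha=0.98):
    """Complementary filter: integrate the gyro yaw rate and correct its drift
    with absolute heading estimates derived from the camera.

    gyro_z         : yaw rates [rad/s] from the onboard IMU (hypothetical stream)
    visual_heading : headings [rad] estimated from camera frames (hypothetical)
    dt             : sampling period [s]
    alpha          : weight given to the gyro integration (0..1)
    """
    heading = visual_heading[0]              # initialize from the first visual estimate
    fused = []
    for rate, vis in zip(gyro_z, visual_heading):
        heading = alpha * (heading + rate * dt) + (1.0 - alpha) * vis
        fused.append(heading)
    return np.array(fused)

# Synthetic check: constant 0.5 rad/s turn for 1 s, biased gyro, noisy vision.
t = np.arange(0.0, 1.0, 0.01)
true_heading = 0.5 * t
gyro = np.full_like(t, 0.5) + 0.02                        # constant gyro bias
vision = true_heading + np.random.normal(0.0, 0.05, t.size)
print(fuse_heading(gyro, vision, 0.01)[-1], "vs true", true_heading[-1])
```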

 

Expected Outcomes:

1. A fully functional autonomous toy car capable of navigating indoor environments using VIO.

2. Integration of pan and tilt mechanisms for capturing images from various angles.

3. Development of algorithms for VIO-based navigation and mapping.

4. Implementation of on-spot rotation for capturing 360-degree panoramic views.

5. Evaluation of the system's performance in terms of navigation accuracy and surveillance effectiveness.

 

The proposed thesis aims to contribute to the field of home surveillance robotics by developing an autonomous toy car equipped with VIO capabilities. The integration of pan and tilt mechanisms and on-spot rotation will enhance the car's surveillance capabilities, providing users with comprehensive monitoring of indoor spaces. The successful completion of this project will pave the way for future advancements in home surveillance systems.

 

Past examples of similar projects can be found at: https://josegaspar999.github.io/sentinel.github.io/

 

Location: IST Alameda / ISR - Torre Norte

 


31752     Construction and Calibration of a Portable Point-Depth Camera for Networked Surveillance Systems

Supervisor: José Gaspar

 

The proliferation of camera networks in public spaces underscores the importance of accurate calibration and data extraction. Establishing precise calibration within a global reference frame is critical for various processing tasks like object tracking and event detection. However, achieving this calibration presents challenges due to the lack of overlapping fields of view among cameras. Current approaches often rely on specialized equipment or reference images.

 

To address this challenge, this thesis proposes the construction and calibration of a portable point-depth camera. This camera, with combined pan-tilt functionality and laser point-depth sensing capabilities, aims to measure depths (Z) and XY coordinates of 3D points while providing a unified coordinate system for the entire camera network. Unlike traditional methods, which estimate the motion of the camera, this work focuses on constructing the camera itself and developing methodologies for its calibration within the network.

 

The primary objectives of this thesis are as follows:

1. Constructing a portable point-depth camera equipped with pan-tilt functionality and laser point-depth sensing capabilities.

2. Investigating methodologies for detecting and matching features between the portable point-depth camera and networked cameras.

3. Exploring techniques for detecting and matching 3D lines observed by the portable point-depth camera with their corresponding 2D lines in camera images.

4. Developing and evaluating camera calibration methodologies tailored to the portable setup, utilizing standard computer vision techniques.

 

The proposed methodology involves the following steps:

1. Construction of the portable point-depth camera: Design and assemble a portable point-depth camera with pan-tilt functionality and laser point-depth sensing capabilities.

2. Testing feature detection and correspondence: Experiment with the Scale-Invariant Feature Transform (SIFT) for detecting and matching features between the portable point-depth camera and the networked cameras (a minimal matching sketch is given after this list).

3. Testing 3D line detection and correspondence: Investigate techniques for detecting and matching 3D lines observed by the portable point-depth camera with their corresponding 2D lines in camera images.

4. Camera calibration: Develop and evaluate camera calibration methodologies based on the information obtained in the previous steps, incorporating additional motion estimation capabilities provided by the portable point-depth camera.
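As an illustration of step 2, the following OpenCV sketch detects SIFT features in one image from the portable camera and one from a networked camera, keeping the matches that pass Lowe's ratio test [Lowe04]. The file names are placeholders and an OpenCV build with SIFT available (e.g., opencv-contrib-python) is assumed; it is a starting point, not the final matching pipeline.

```python
import cv2

# Placeholder images: one view from the portable point-depth camera and one
# from a fixed camera of the network.
img_portable = cv2.imread("portable_view.png", cv2.IMREAD_GRAYSCALE)
img_network = cv2.imread("network_cam_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_portable, None)
kp2, des2 = sift.detectAndCompute(img_network, None)

# Brute-force matching followed by Lowe's ratio test to keep distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences between the two views")
```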

 

By focusing on the construction and calibration of a portable point-depth camera, this thesis aims to address the challenge of non-overlapping fields of view in camera networks. Through experimentation and refinement of methodologies, this work seeks to provide a comprehensive solution for camera calibration in networked surveillance systems.

 

References:

 

[Diniz23] "Integration of Surveillance Cameras based on a Mobile Point-depth Camera", Manuel Diniz, MSc Thesis, MEEC21, IST - 2022/2023.

 

[Gois20] "Calibration of Surveillance Cameras based on an Auxiliary Color-Depth Camera", Diogo Góis, MSc Thesis, Electrical and Computer Engineering, IST - 2019/2020.

 

[Bento19] "Fast Setting of Networks of Surveillance Cameras", Mário Bento, MSc Thesis, Electrical and Computer Engineering, IST - 2018/2019.

 

[Silva12] "Camera Calibration using a Color-Depth Camera: Points and Lines Based DLT including Radial Distortion", M. Silva, R. Ferreira and J. Gaspar, in Workshop on Color-Depth Camera Fusion in Robotics, held with IROS 2012.

 

[Leite08] "Calibrating a Network of Cameras Based on Visual Odometry", Nuno Leite, Alessio Del Bue, José Gaspar, in Proc. of IV Jornadas de Engenharia Electrónica e Telecomunicações e de Computadores, pp. 174-179, November 2008, Lisbon, Portugal.

 

[Hartley00] R. I. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", Cambridge University Press, pages 150–152, 2000.

 

[Lowe04] David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, 60(2):91–110, 2004.

 

Location: IST Alameda / ISR - Torre Norte

 

 


31722     Portable Point-Depth Camera Motion Estimation

Supervisor: José Gaspar

 

Camera networks have become increasingly prevalent for surveillance purposes, necessitating the accurate calibration of individual cameras within a global reference frame. However, the lack of overlapping fields of view often hinders the estimation of a common reference frame. To address this challenge, we propose the use of a portable point-depth camera that can estimate its own motion and provide a unified coordinate system for the entire camera network. This work aims to develop methodologies for motion estimation and camera calibration using the portable setup.

 

The widespread deployment of camera networks in public spaces, such as buildings, streets, and tunnels, has highlighted the need for accurate calibration and information extraction from video streams. Establishing a correct calibration, based on a global reference frame, is crucial for higher-level processing tasks like object tracking, event detection, and metrology. However, achieving this calibration is challenging due to the lack of overlapping fields of view among cameras. Current research in computer vision focuses on resolving this issue, often relying on specialized equipment or reference images.

 

To address the camera calibration problem in networked environments, we propose a portable setup that can localize itself within a global coordinate system [Bento19, Gois20]. Our setup consists of a calibrated point-depth camera capable of observing the same field of view as the fixed cameras in the network. By leveraging the depth measurement capability, we aim to estimate the motion of the portable setup and establish a unified coordinate system for the entire camera network.

 

The proposed work involves the following steps:

1. Testing the detection and correspondence of SIFT features in both simulated and real scenarios: We will explore the use of Scale-Invariant Feature Transform (SIFT) for detecting and matching features between the point-depth camera and the networked cameras. This step aims to establish reliable correspondences for subsequent calibration processes.

2. Testing the detection and correspondence of 3D lines in the scene with imaged 2D lines: We will investigate the detection and matching of 3D lines observed in the scene by the point-depth camera with their corresponding 2D lines in the camera images. This approach can enhance the accuracy of camera calibration.

3. Testing camera calibration methodologies based on the information obtained in the previous steps: Using the correspondences established in Steps 1 and 2, we will develop and evaluate camera calibration methodologies tailored to the portable setup. These methodologies will utilize standard computer vision techniques, such as those presented in previous works [Hartley00, Leite08, Silva12], while incorporating the additional motion estimation capabilities provided by the portable point-depth camera.
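As a concrete starting point for step 3, the sketch below recovers the pose of a fixed camera from 3D points (standing in for measurements by the portable point-depth camera) and their 2D projections, using OpenCV's PnP solver; a synthetic camera generates the test data. The intrinsic parameters and points are illustrative assumptions only, and a complete solution would incorporate the line-based and radial-distortion terms of [Silva12] and the formulation of [Hartley00].

```python
import numpy as np
import cv2

# Hypothetical intrinsics of one fixed network camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic 3D points, standing in for points measured by the portable camera.
pts_3d = np.array([[0.0, 0.0, 4.0], [1.0, 0.0, 5.0], [0.0, 1.0, 6.0],
                   [1.0, 1.0, 4.0], [-1.0, 0.5, 5.0], [0.5, -1.0, 6.0]])

# Ground-truth pose, used here only to generate the 2D observations.
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.3, -0.1, 0.5])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_gt, tvec_gt, K, None)

# Calibration step: recover the camera pose from the 3D-2D correspondences.
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None, flags=cv2.SOLVEPNP_ITERATIVE)
print("recovered rotation   :", rvec.ravel())
print("recovered translation:", tvec.ravel())
```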

 

By conducting these tests and refining the proposed methodologies, we aim to provide a comprehensive solution for motion estimation and camera calibration in camera networks. This work not only addresses the challenge of non-overlapping fields of view but also contributes to the advancement of computer vision techniques for networked surveillance systems.

 

References:

 

[Gois20] "Calibration of Surveillance Cameras based on an Auxiliary Color-Depth Camera", Diogo Góis, MSc Thesis, Electrical and Computer Engineering, IST - 2019/2020.

 

[Bento19] "Fast Setting of Networks of Surveillance Cameras", Mário Bento, MSc Thesis, Electrical and Computer Engineering, IST - 2018/2019.

 

[Silva12] "Camera Calibration using a Color-Depth Camera: Points and Lines Based DLT including Radial Distortion", M. Silva, R. Ferreira and J. Gaspar, in Workshop on Color-Depth Camera Fusion in Robotics, held with IROS 2012.

 

[Leite08] "Calibrating a Network of Cameras Based on Visual Odometry", Nuno Leite, Alessio Del Bue, José Gaspar, in Proc. of IV Jornadas de Engenharia Electrónica e Telecomunicações e de Computadores, pp. 174-179, November 2008, Lisbon, Portugal.

 

[Hartley00] R. I. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", Cambridge University Press, pages 150–152, 2000.

 

[Lowe04] David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, 60(2):91–110, 2004.

 

Location: IST Alameda / ISR - Torre Norte

 

 

 


31703     Automated Monitoring Using Pan-Tilt-Zoom Cameras with YOLO Object Detection Enhanced by Depth-Based Zoom Selection

Supervisor: José Gaspar

 

This MSc thesis proposal aims to automate the monitoring of dynamic scenarios using pan-tilt-zoom (PTZ) cameras, specifically focusing on the Axis 5512 and Axis 5624 models. The long-term objective involves applications such as inventory checks in environments like museum rooms, where the PTZ camera systematically surveys the area to ensure the proper placement of objects.

 

PTZ cameras offer high-resolution imaging at multiple points of interest, but mosaic construction at maximum resolution demands significant memory resources. Balancing memory requirements while minimizing computational overhead is a key challenge addressed in this research.

 

The project entails two core components:

(i) Geometric modeling and calibration of PTZ cameras.

(ii) Representation of the list of objects and their respective locations (pan, tilt, and depth).

 

Since PTZ cameras rotate about a single projection center, stereo vision is not feasible. Consequently, obtaining depth information relies on the ability to manually adjust the focus settings of the cameras.
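A minimal sketch of the geometric model behind component (i) follows: with a fixed projection center, a pan-tilt configuration amounts to a pure rotation of the camera frame, followed by a standard pinhole projection. The intrinsic values and axis conventions are illustrative assumptions; the actual model must be fitted to the Axis cameras through calibration.

```python
import numpy as np

def rot_pan_tilt(pan, tilt):
    """Rotation of the camera frame for a given pan (about the vertical axis)
    and tilt (about the camera's horizontal axis), angles in radians.
    The axis convention is an illustrative assumption."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    R_pan = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_tilt = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    return R_tilt @ R_pan

def project(point_3d, pan, tilt, f=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of a 3D point (in the PTZ base frame) for a given
    pan/tilt; the projection center is assumed fixed (no translation)."""
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])
    p_cam = rot_pan_tilt(pan, tilt) @ np.asarray(point_3d, dtype=float)
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])

# Example: a point 5 m in front of the base frame, viewed with 10 degrees of pan,
# appears roughly 176 pixels to the side of the principal point.
print(project([0.0, 0.0, 5.0], np.deg2rad(10.0), 0.0))
```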

 

References:

1. Diogo Góis. "Calibration of Surveillance Cameras based on an Auxiliary Color-Depth Camera." MSc Thesis, Electrical and Computer Engineering, IST, 2019/2020.

2. Luís Alves. "Background Representation and Subtraction for Pan-Tilt-Zoom Cameras." MSc Thesis, Electrical and Computer Engineering, IST, 2017/2018.

3. Tiago Marques. "Cooperating Smart Cameras." MSc Thesis, Electrical and Computer Engineering, IST, 2014/2015.

4. Pedro Silva. "Vision Based Multi-Target Tracking." MSc Thesis, Electrical and Computer Engineering, IST, 2013/2014.

5. Tiago Castanheira. "Multitasking of Smart Cameras." MSc Thesis, Electrical and Computer Engineering, IST, 2012/2013.

6. Diogo Leite. "Target Tracking with Pan-Tilt-Zoom Cameras." MSc Thesis, Electrical and Computer Engineering, IST, 2010/2011.

 

Historical References:

- Frame-Rate Omnidirectional Surveillance & Tracking: http://vast.uccs.edu/~tboult/frame/Boult/index.html

- Video Surveillance and Monitoring: http://www.cs.cmu.edu/~vsam/

 

Location: IST Alameda / ISR - Torre Norte

 

 


31695     Utilizing Model Predictive Control with GPS Integration for Autonomous Driving

Supervisors: José Gaspar, João Fernandes

 

Autonomous vehicles stand at the vanguard of transportation innovation, presenting a paradigm shift in mobility. Central to their efficacy is the ability to navigate seamlessly, leveraging robust control methodologies and GPS integration to traverse diverse environments with finesse.

 

This thesis endeavors to pioneer the integration of Model Predictive Control (MPC) into autonomous driving systems, with enhanced GPS integration for precise localization and decision-making. Focusing on the Fiat Seicento Elettra within the VIENA project, we propose a comprehensive exploration spanning simulation-based investigations and real-world validation.

 

The proposed methodology unfolds as follows:

- Establishment of simulation frameworks tailored to MPC-based navigation and motion control, building upon prior MSc research (a minimal MPC sketch is given after this list).

- Integration of MPC algorithms with existing navigation systems, harnessing rich localization, pose, and GPS data.

- Rigorous validation and performance assessment of the MPC-driven autonomous driving system on the VIENA car platform.
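A minimal receding-horizon sketch of the MPC idea is given below: a kinematic bicycle model is rolled out over a short horizon, a quadratic cost on the distance to a GPS-derived waypoint plus control effort is minimized, and only the first control is applied. All parameters (horizon, wheelbase, bounds) are illustrative and unrelated to the actual VIENA vehicle.

```python
import numpy as np
from scipy.optimize import minimize

DT, HORIZON, WHEELBASE = 0.1, 10, 2.0  # illustrative values, not VIENA parameters

def step(state, control):
    """Kinematic bicycle model: state = [x, y, heading, speed], control = [accel, steer]."""
    x, y, th, v = state
    a, delta = control
    return np.array([x + v * np.cos(th) * DT,
                     y + v * np.sin(th) * DT,
                     th + v / WHEELBASE * np.tan(delta) * DT,
                     v + a * DT])

def cost(u_flat, state, target):
    """Tracking cost over the horizon plus a small control-effort penalty."""
    u = u_flat.reshape(HORIZON, 2)
    c = 0.0
    for k in range(HORIZON):
        state = step(state, u[k])
        c += np.sum((state[:2] - target) ** 2) + 0.01 * np.sum(u[k] ** 2)
    return c

def mpc_control(state, target):
    """Solve the finite-horizon problem and apply only the first control (receding horizon)."""
    u0 = np.zeros(HORIZON * 2)
    bounds = [(-2.0, 2.0), (-0.5, 0.5)] * HORIZON        # acceleration and steering limits
    res = minimize(cost, u0, args=(state, target), bounds=bounds, method="L-BFGS-B")
    return res.x.reshape(HORIZON, 2)[0]

# Toy usage: car at the origin heading along x, waypoint at (5, 2) (e.g., from GPS).
print(mpc_control(np.array([0.0, 0.0, 0.0, 1.0]), np.array([5.0, 2.0])))
```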

 

Anticipated outcomes encompass:

- Development of an MPC-driven autonomous driving paradigm, leveraging the predictive power of MPC and precise GPS integration for navigation.

- Thorough evaluation of system performance under varied driving conditions, delineating strengths and areas for refinement.

- Identification of optimization pathways to enhance system efficacy and robustness, utilizing GPS data for improved decision-making.

 

This thesis endeavors to carve a path towards autonomous driving excellence, capitalizing on the predictive prowess of MPC and the precision of GPS data integration to navigate complex environments with finesse. By melding simulation-based inquiry with real-world validation, it seeks not only to elevate current capabilities but also to chart a course towards safer, more efficient autonomous driving systems.

 

Location: IST Alameda

 

 


31694     Estimating Car Localization, Pose, and GPS Integration

Supervisors: José Gaspar, João Fernandes

 

Autonomous cars represent a transformative technology poised to revolutionize transportation. Amidst this revolution, a pivotal challenge lies in crafting systems adept at perceiving their environment with precision, enabling informed decision-making.

 

This thesis endeavors to pioneer the development of an autonomous car navigation system finely attuned to environmental perception, including GPS integration for enhanced localization and decision-making. Focused on the Fiat Seicento Elettra, as part of the VIENA project, we aim to fuse simulation-based investigations with real-world experimentation to advance navigation and control technologies.

 

The proposed methodology encompasses several key phases:

- Initialization of simulations tailored to navigation and motion control, leveraging prior MSc dissertations.

- Fusion of visual SLAM techniques with GPS data integration for robust localization and navigation (a minimal fusion sketch is given after this list).

- Rigorous testing and evaluation of the autonomous car navigation and motion control system utilizing the VIENA car platform.
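A minimal sketch of the fusion idea is shown below: displacements reported by visual SLAM drive the prediction of a linear Kalman filter, while GPS fixes (converted to a local metric frame) correct the accumulated drift. The noise values and simulated data are illustrative assumptions, not parameters of the VIENA platform.

```python
import numpy as np

class GpsOdometryFusion:
    """Minimal linear Kalman filter: visual-SLAM/odometry displacements drive the
    prediction, and GPS positions (already in a local metric frame) correct drift."""

    def __init__(self, q=0.01, r=2.0):
        self.x = np.zeros(2)          # estimated position [east, north] in metres
        self.P = np.eye(2) * 10.0     # initial uncertainty
        self.Q = np.eye(2) * q        # process noise of the odometry displacement
        self.R = np.eye(2) * r        # GPS measurement noise (~metres)

    def predict(self, delta_xy):
        """delta_xy: displacement reported by visual SLAM since the last step."""
        self.x = self.x + delta_xy
        self.P = self.P + self.Q

    def update(self, gps_xy):
        """gps_xy: GPS fix expressed in the same local metric frame."""
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ (gps_xy - self.x)
        self.P = (np.eye(2) - K) @ self.P

# Toy usage: the odometry drifts slightly, the GPS pulls the estimate back.
true_east = 0.0
f = GpsOdometryFusion()
for _ in range(50):
    true_east += 0.10
    f.predict(np.array([0.11, 0.0]))                                  # biased odometry step
    f.update(np.array([true_east, 0.0]) + np.random.normal(0, 1.0, 2))  # noisy GPS fix
print("estimate:", f.x, "true east:", true_east)
```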

 

Anticipated outcomes encompass:

- Crafting an autonomous car navigation paradigm anchored in visual SLAM methodologies and GPS integration.

- Rigorous evaluation of navigation system performance, elucidating strengths and weaknesses across varied environments.

- Identification of system limitations and avenues for enhancement to bolster efficacy and GPS utilization.

 

This thesis promises to be a catalyst in the evolution of autonomous car navigation, poised to deliver systems adept at perceiving and reacting to the environment, leveraging the added precision of GPS data integration. Through a judicious blend of simulation and real-world experimentation, it seeks not only to advance current capabilities but also to illuminate pathways towards future enhancements.

 

Location: IST Alameda

 

 


31663     FPGA-based VIO accelerator for real-time 3D mapping of underwater caves

Supervisors: José Gaspar, Nuno Neves

 

With the advent of the Internet-of-Things (IoT) and Edge Computing (EC), several scientific areas have gained the ability to perform on-site computation during scientific missions. A particular case is that of underwater speleology and the survey and topography of flooded cave systems. While these activities are traditionally performed by hand by specialized cave divers, recent advances in video camera technology and EC systems have led to the exploration of photogrammetry and structure-from-motion (SfM) techniques to accurately map cave systems. However, these techniques are logistically costly, since they require the acquisition of thousands of images to accurately model a particular cave section.

 

Alternatively, simultaneous localization and mapping (SLAM) and visual odometry (VO) are prominent techniques widely used in robotics, computer vision, and virtual/augmented reality applications. These systems work by determining the position of an agent (e.g., a video camera) relative to its surroundings, in real time, while creating a spatial map of the environment and estimating the agent's trajectory through it. Besides visual data obtained from an image sensor, recent algorithms also make use of inertial data from embedded sensors to increase localization accuracy.

 

The current availability of submersible action cameras (e.g., GoPro) that not only record 4K and higher-definition video but also embed arrays of inertial sensors (e.g., gyroscope, magnetometer, accelerometer) makes such devices well suited to acquiring the data needed to model underwater caves during exploration dives. The acquired data can then be used as a baseline for the development of new submersible embedded devices tailored to the task at hand. These devices will allow on-site teams to pre-visualize accurate point-cloud models of the environment, speeding up the collection of data that will later be used to perform 3D reconstruction on a more powerful server machine.

 

Objectives:

 

The aim of this thesis is to research and develop a domain-specific accelerator to enhance real-time point-cloud 3D reconstruction with Visual-Inertial Odometry (VIO) techniques. The work will first entail the study of state-of-the-art algorithms (e.g., UKF, FGO) and of algorithms under development by the supervision team, analyzing their computational requirements and suitability for acceleration. The second part of the work envisages the implementation of a dedicated FPGA-based accelerator (in RTL or with HLS), capable of accurately estimating the metric scale of a point-cloud reconstruction from input video and inertial data streams in real time.
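To illustrate one of the computations such an accelerator could target, the sketch below estimates the metric scale of an up-to-scale visual odometry trajectory by least-squares alignment with displacements obtained by double-integrating gravity-compensated accelerometer data over short windows. It is a simplified illustration only (known gravity, zero bias, and aligned frames are assumed) and does not represent the supervision team's algorithms.

```python
import numpy as np

def estimate_scale(vo_disp, accel, dt):
    """Least-squares metric scale for an up-to-scale visual odometry trajectory.

    vo_disp : (N, 3) per-window displacements from monocular VO (arbitrary scale)
    accel   : (N, M, 3) gravity-compensated accelerometer samples per window
    dt      : IMU sampling period [s]

    Assumes both displacement sets are expressed in the same frame; a real system
    must also estimate gravity, biases, and the camera-IMU extrinsics.
    """
    # Euler double integration within each window (zero initial velocity assumed).
    imu_disp = np.array([np.cumsum(np.cumsum(a, axis=0), axis=0)[-1] * dt * dt
                         for a in accel])
    num = np.sum(vo_disp * imu_disp)
    den = np.sum(vo_disp * vo_disp)
    return num / den   # scale s minimizing ||s * vo_disp - imu_disp||^2

# Toy check: 1 m/s^2 along x for 0.5 s gives ~0.125 m of true displacement;
# VO reports 0.25 units, so the recovered scale is ~0.5 metres per VO unit.
acc_windows = np.ones((1, 50, 3)) * np.array([1.0, 0.0, 0.0])
vo_windows = np.array([[0.25, 0.0, 0.0]])
print(estimate_scale(vo_windows, acc_windows, 0.01))
```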

 

The work will be integrated into an ongoing project to research and develop a 3D cave-mapping infrastructure for the exploration of underwater cave systems. The project is running at INESC-ID, in partnership with the Institute for Systems and Robotics (ISR) and the Portuguese Speleology Society (SPE), which will provide real data obtained during the ongoing exploration of national underwater caves.

Requirements:

- Knowledge of C programming language

- Preferred (not mandatory): experience in HDL design (e.g., VHDL/Verilog)

- Minimum average grade: >14 in 1st cycle and >15 in 2nd cycle

- Personal interview, before thesis application

Location: INESC-ID, Alameda Campus

 

 


31648     Integrating IMU Data with Insta360 Camera for 3D Point Cloud Reconstruction of Underground Caves

Supervisors: José Gaspar, Nuno Neves

 

Underground cave exploration presents unique challenges due to limited visibility and difficult terrain. Traditional mapping methods often fall short in accurately capturing the intricate details of these environments. In recent years, advancements in camera technology, such as the Insta360 camera, coupled with inertial measurement units (IMUs), offer promising opportunities for enhanced 3D reconstruction of underground caves. This proposal aims to leverage these technologies to generate precise 3D point clouds of cave environments.

 

The objectives are:

- Develop a methodology for integrating IMU data with video footage from the Insta360 camera to reconstruct a 3D point cloud of underground caves.

- Investigate calibration techniques to accurately synchronize IMU data with video frames to account for camera motion and orientation.

- Evaluate the effectiveness of using the Insta360 camera, with its wide field of view, in comparison to previous methods using a GoPro camera.

- Assess the impact of the rotational speed and translational acceleration measured by the IMU on the quality and accuracy of the 3D reconstructions.

- Validate the proposed methodology through experiments conducted in real underground cave environments.

 

Methodology:

- Pre-processing: Extract IMU data, including rotational speed and translational acceleration, and synchronize it with the corresponding video frames captured by the Insta360 camera (a minimal synchronization sketch is given after this list).

- Camera Calibration: Calibrate the Insta360 camera to correct for lens distortion and ensure accurate alignment of images.

- Feature Extraction: Utilize feature detection algorithms to identify distinctive points in the video frames, enabling robust matching across frames.

- Structure from Motion (SfM): Employ SfM techniques to estimate camera poses and reconstruct the 3D geometry of the cave environment.

- Point Cloud Generation: Convert the reconstructed geometry into a dense 3D point cloud in metric units, representing the cave's spatial structure.

- Evaluation: Quantitatively assess the accuracy and completeness of the generated point clouds through comparison with ground truth data and existing cave maps.
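For the pre-processing step, a minimal sketch of resampling IMU samples onto video frame timestamps by linear interpolation is given below. The sampling rates, field layout, and time offset are illustrative assumptions; the real Insta360 metadata must be parsed first, and the clock offset has to be estimated during calibration (e.g., by correlating gyro rates with visually estimated rotation).

```python
import numpy as np

def imu_to_frames(imu_t, imu_samples, frame_t, offset=0.0):
    """Resample IMU samples (shape (N, 3), e.g., angular velocity or acceleration)
    onto the video frame timestamps by linear interpolation.

    offset is the estimated time shift between the IMU and camera clocks."""
    frame_t = np.asarray(frame_t) + offset
    return np.stack([np.interp(frame_t, imu_t, imu_samples[:, i]) for i in range(3)],
                    axis=1)

# Toy usage: 200 Hz IMU, 30 fps video, 2 s of data.
imu_t = np.arange(0.0, 2.0, 1 / 200)
gyro = np.column_stack([np.sin(imu_t), np.zeros_like(imu_t), np.zeros_like(imu_t)])
frame_t = np.arange(0.0, 2.0, 1 / 30)
print(imu_to_frames(imu_t, gyro, frame_t).shape)   # (60, 3): one sample per frame
```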

 

Preliminary Work:

- Review the thesis by Arthur Lago on "Underwater Caves Tridimensional Reconstruction" and the associated article published in the ICARSC conference.

- Identify key insights and methodologies from the previous work conducted with a GoPro camera, focusing on registration challenges and reconstruction accuracy.

- Leverage lessons learned from prior research to inform the development of the proposed methodology, particularly in adapting it to the Insta360 camera and IMU integration.

 

Concluding Remarks:

- The proposed research addresses a critical need for accurate 3D mapping of underground cave environments, facilitating exploration, conservation, and scientific study.

- By leveraging advancements in consumer-grade camera technology and IMU sensors, the proposed methodology offers a cost-effective and scalable solution for cave mapping.

- The integration of IMU data with camera imagery enhances the robustness and accuracy of 3D reconstructions, overcoming limitations associated with visual-only approaches.

 

This proposal outlines a comprehensive approach for integrating IMU data with video footage from the Insta360 camera to reconstruct precise 3D point clouds of underground caves. By building upon existing research and leveraging cutting-edge technologies, this work aims to advance the field of cave mapping and contribute to our understanding of these unique and often inaccessible environments.

 

Location: IST Alameda / ISR - Torre Norte / INESC-ID

 

 


31479     Intrusion Detection with Wireless Autonomous Security System Using Camera and IMU-Equipped Toy Car

Supervisor: José Gaspar

 

The proposed project aims to develop an autonomous security system that can detect an intrusion in a room and identify its possible causes. The system will be implemented using a toy car equipped with a camera and an inertial measurement unit (IMU). The project will involve the design, construction, and programming of the toy car to perform the desired tasks, with images and sensor data transmitted wirelessly to a remote computer for processing.

 

Objectives:

• Develop a toy car that can move autonomously in a room.

• Equip the toy car with a camera and IMU to capture images and provide navigation data.

• Program the toy car to transmit captured images and navigation data wirelessly to a remote computer for processing.

• Process the data received wirelessly on the remote computer to identify possible reasons for the intrusion, such as sudden window or door opening.

• Demonstrate the effectiveness of the system in identifying the cause of the intrusion.

 

The proposed system will consist of a toy car, a camera, an IMU, and a microcontroller. The toy car will be designed and constructed to move autonomously in a room. The camera will be used to capture images of the room, while the IMU will provide navigation data that can be used to detect the position and orientation of the toy car. The microcontroller will be used to process the sensor data and control the movement of the toy car.

 

The system will be programmed to transmit the images and navigation data captured by the camera and IMU wirelessly to a remote computer for processing. The remote computer will be responsible for processing the data received wirelessly and identifying possible reasons for the intrusion.
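A minimal sketch of the remote-side processing is given below, using OpenCV background subtraction on the received frames to flag large scene changes. The video source stands in for the wireless stream, and the foreground-area threshold is an illustrative choice rather than part of the proposed system.

```python
import cv2

# Background subtractor maintained on the remote computer; frames arrive over the
# wireless link (a local video file is used here as a stand-in).
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
capture = cv2.VideoCapture("received_stream.avi")   # placeholder for the wireless stream

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # A large foreground area suggests an intrusion event (e.g., a door or window
    # suddenly opening); the 2% threshold is an illustrative choice.
    changed = cv2.countNonZero(mask) / mask.size
    if changed > 0.02:
        print("possible intrusion: %.1f%% of the image changed" % (100 * changed))
capture.release()
```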

 

The proposed system is expected to identify the possible causes of an intrusion in a room. The toy car will move autonomously, capture images and navigation data of the room, and transmit them wirelessly to a remote computer for processing. The remote computer will analyze the received data and identify possible reasons for the intrusion, such as a window or door suddenly opening, and the car will then verify the suspected cause by moving towards the detected changes in the room's environment.

 

The proposed system has the potential to be used in various applications, such as in homes, offices, and warehouses, where wireless data transmission can help reduce the cost and complexity of the system.

 

Location: IST Alameda / ISR - Torre Norte

 

 

 


31478     Scene Content Monitoring with Pan-Tilt-Zoom Cameras

Supervisor: José Gaspar

 

Given one pan-tilt-zoom camera, the long-term objective consists of automating the monitoring of changes in a scenario. An example application is an inventory check in a museum room, where the pan-tilt-zoom camera browses the scene at the end of the day to confirm that all museum pieces are at their designated locations.

 

Pan-tilt-zoom cameras provide very high resolution at widely separated points of interest. Mosaicing at the highest resolution requires large amounts of memory. Minimizing the memory requirements without excessively increasing the computational cost is a central objective of this thesis.

 

This project is based on the research and development of three fundamental components:

(i) pan-tilt-zoom camera geometric modeling and calibration

(ii) integrating changes in the background model of the scenario

(iii) detecting modifications of the scenario
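As a minimal illustration of component (iii), the sketch below compares the current view captured at a stored pan-tilt preset against its reference image and reports the regions that changed. The image pair, blur, and thresholds are illustrative assumptions; in the full system the views must first be registered using the calibrated model of component (i) and the adaptive background of component (ii).

```python
import cv2

def detect_scene_changes(reference, current, blur=5, thresh=40, min_area=500):
    """Return bounding boxes of regions that differ between a stored reference view
    and the current view captured at the same pan-tilt-zoom setting."""
    ref = cv2.GaussianBlur(cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
    cur = cv2.GaussianBlur(cv2.cvtColor(current, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
    diff = cv2.absdiff(ref, cur)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Placeholder images; in the real system both come from the PTZ camera at one preset.
reference = cv2.imread("preset_03_reference.png")
current = cv2.imread("preset_03_today.png")
if reference is not None and current is not None:
    for (x, y, w, h) in detect_scene_changes(reference, current):
        print(f"changed region at x={x}, y={y}, size {w}x{h}")
```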

 

 

References:

 

"Calibration of Surveillance Cameras based on an Auxiliary Color-Depth Camera", Diogo Góis, MSc Thesis, Electrical and Computer Engineering, IST - 2019/2020.

 

"Background Representation and Subtraction for Pan-Tilt-Zoom Cameras", Luís Alves, MSc Thesis, Electrical and Computer Engineering, IST - 2017/2018.

 

"Cooperating Smart Cameras", Tiago Marques, MSc Thesis, Electrical and Computer Engineering, IST - 2014/2015.

 

"Vision Based Multi-Target Tracking", Pedro Silva, MSc Thesis, Electrical and Computer Engineering, IST - 2013/2014.

 

"Multitasking of Smart Cameras", Tiago Castanheira, MSc Thesis, Electrical and Computer Engineering, IST - 2012/2013.

 

"Target Tracking with Pan-Tilt-Zoom Cameras", Diogo Leite, MSc Thesis, Electrical and Computer Engineering, IST - 2010/2011.

 

 

Historical references:

 

Frame-Rate Omnidirectional Surveillance & Tracking

http://vast.uccs.edu/~tboult/frame/Boult/index.html

 

Video Surveillance and Monitoring

http://www.cs.cmu.edu/~vsam/

Requirements:

-

Location: ISR / IST