1. **A. Ahmad and H. Bülthoff**, //Moving-horizon nonlinear least squares-based multirobot cooperative perception//, **Robotics and Autonomous Systems (RAS) Journal**, Volume 83, Pages 275–286, August 2016.
  * A highly scalable method for multirobot cooperative perception was developed. The core of this method is based on pose-graph optimization, where the state estimation problem is considered as a weakly connected graph of nodes (representing the poses to be estimated) and edges (the measurements that constrain those poses). Such a formulation is highly preferred because it provides near-optimal estimates under good initialization conditions. However, it also generally leads to a computationally heavy optimization problem that is neither feasible to run in real time nor scalable. Therefore, in order to achieve run-time feasibility and scalability in an optimization-based scheme, we studied a moving horizon-based estimation approach integrated with a nonlinear least squares-based minimization, the latter being the core optimization scheme; a generic sketch of this idea is given after this list. We found that, for such an integrated estimator to be asymptotically stable and convergent, the most important criterion lies in the proper design of the arrival cost function. Subsequently, we developed an optimization-based estimator for multirobot cooperative perception that not only runs in real time and is scalable but is also asymptotically stable and convergent. We implemented and rigorously tested this method on a publicly available, rich dataset from a team of four soccer robots that must localize themselves and track a standard soccer ball, as well as on another dataset in which a team of robots tracks each other at a low measurement frequency. Through an extensive set of real-robot experiments, we demonstrate the robustness of our method as well as the optimality of the arrival cost function. The experiments included comparisons of our method with i) an extended Kalman filter-based online estimator and ii) an offline estimator based on full-trajectory nonlinear least squares.
  * This work fulfils the first core objective of the project as outlined previously.
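The following is a minimal, generic sketch of the moving-horizon nonlinear least-squares idea described above, applied to a toy single-target tracking problem. It is not the estimator from the paper; the 1-D constant-velocity model, the window length and all weights are assumptions made purely for illustration. Note how the arrival cost term anchors the oldest state in the window to the estimate carried over from the previous window.

<code python>
"""Minimal moving-horizon estimation (MHE) sketch using a nonlinear
least-squares solver, on a toy 1-D constant-velocity tracking problem.
This is a generic illustration of the sliding-window-plus-arrival-cost
idea, not the estimator from the paper; the model, window length and
weights below are assumptions made for the example."""
import numpy as np
from scipy.optimize import least_squares

DT, HORIZON = 0.1, 10        # time step and window length (assumed)
Q, R, P0 = 0.05, 0.5, 1.0    # process, measurement and arrival-cost weights

def predict(state):
    """Constant-velocity motion model; state = [position, velocity]."""
    pos, vel = state
    return np.array([pos + vel * DT, vel])

def residuals(flat_states, measurements, prior):
    """Stacked residuals: arrival cost + measurement + motion constraints."""
    states = flat_states.reshape(-1, 2)
    res = [(states[0] - prior) / np.sqrt(P0)]                # arrival cost
    res.append((measurements - states[:, 0]) / np.sqrt(R))   # position meas.
    for k in range(len(states) - 1):                         # motion model
        res.append((states[k + 1] - predict(states[k])) / np.sqrt(Q))
    return np.concatenate([np.ravel(r) for r in res])

# Simulate a moving target and noisy position measurements of it.
rng = np.random.default_rng(0)
true_states = [np.array([0.0, 1.0])]
for _ in range(60):
    true_states.append(predict(true_states[-1]))
meas = np.array([s[0] + rng.normal(0.0, np.sqrt(R)) for s in true_states])

# Slide the horizon over the data; the arrival cost anchors the oldest
# state in the window to the estimate from the previous window.
prior = np.array([0.0, 0.0])
for k in range(HORIZON, len(meas)):
    window = meas[k - HORIZON:k + 1]
    guess = np.tile(prior, HORIZON + 1)        # crude initialization
    sol = least_squares(residuals, guess, args=(window, prior))
    estimate = sol.x.reshape(-1, 2)
    prior = estimate[1]                        # window shifts by one step
print("final estimate:", estimate[-1], "true state:", true_states[-1])
</code>

In this toy form, dropping the arrival-cost residual would make each window forget all information older than the horizon; loosely, this is the role that the description above attributes to a properly designed arrival cost function.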
2. **A. Ahmad and H. Bülthoff**, //Human-Multirobot Interaction in Cooperative Perception-based Search Missions//, submitted to the **IEEE International Conference on Robotics and Automation (ICRA 2017)**.
  * A novel and systematic psychophysical experiment for human-multirobot interaction, involving a simulated search mission, was designed to study the role of system and human factors in the collaborative task efficiency of an H-MRI search-mission scenario within the context of cooperative perception. In this scenario, a human operator could control a team of robots performing cooperative perception in order to search for survivors in a disaster-like arena. While the system factors included the type of high-level control and the number of robots in the team, the human factors included video-gaming skill and experience, gender and age. Based on the results from 35 human participants, the first main finding of our study is that the collaborative task is performed much better in a scenario where the robots autonomously explore the world and the human operators are tasked only with searching for survivors in the explored regions. When human operators are additionally tasked with controlling the robots, they perform significantly worse unless they possess good video-gaming experience. With such experience, operators quickly adapt to the high-level control mechanism and apply good exploration strategies. Gaming experience could be interpreted as a proxy for operator training, but the latter would need further experimental evaluation.
  * Our second main finding is that there is only a slight benefit in increasing the number of robots in the team performing the collaborative task, under the assumption that a single human operator is responsible for the actual survivor search and classification task. In fact, increasing the number of robots in a scenario where the robots together maintain certain optimal formations becomes detrimental to exploration, as it rapidly increases the computational overhead.
  * We also found that, for a small number of robots, there is no significant difference in the collaborative task efficiency between a fully controlled multirobot team (where each robot is individually commanded) and a formation-controlled team (where the group is commanded as a whole).
  * Based on these findings we could infer certain guidelines for developing an effective architecture for human-multirobot search missions. Clearly, cooperative perception speeds up the exploration process. However, to make the best use of the explored world regions, it might be beneficial to split a large number of robots into several small teams. A small set of such teams should be assigned to a separate human operator who, in turn, would be responsible for the search and classification task only in a specific region. Our future work includes developing such an architecture for search and rescue missions. We also intend to study how operator training could improve the collaborative task efficiency.
  * This work fulfils the second core objective of the project as outlined previously.
4. **A. Ahmad and P. Lima**, //Dataset suite for benchmarking perception in robotics//, in **IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015)**, Workshop: Open forum on evaluation of results, replication of experiments and benchmarking in robotics research, Hamburg, Germany, September 2015.
  * We created a multirobot perception dataset suite. The suite consists of robot-centric sensor data, external ground-truth information and several kinds of pre-processed information derived from the raw sensor data. This dataset suite will be instrumental in further developing and testing extended versions of the cooperative perception functionalities. Along with the data, the suite also includes data-access and benchmarking software tools developed by the research fellow. The dataset suite is made publicly available for download with very detailed usage instructions [3,4,5]; a hypothetical example of how the ground truth could be used for benchmarking is sketched below. Overall, this is not only a significant (and unforeseen) contribution made during this project's execution but also a transparent and easy-to-use platform on which other researchers in our field can reproduce, compare and benchmark our method against theirs. This contribution was disseminated to the research community in a workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2015, Hamburg, Germany.
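As a purely hypothetical illustration of how the suite's external ground truth could be used for benchmarking, the snippet below compares an estimated target track against a ground-truth track and reports a root-mean-square position error. The file names, the CSV layout and the timestamp-based association are assumptions for this example only; the actual formats and the provided access and benchmarking tools are documented in the dataset's usage instructions [3,4,5].

<code python>
"""Hypothetical benchmarking sketch: compare an estimated track against an
external ground-truth track and report RMSE. File names and the CSV layout
(columns t, x, y) are assumptions made for illustration; consult the
dataset suite's usage instructions [3,4,5] for the actual formats."""
import numpy as np
import pandas as pd

# Assumed CSV layout: one row per timestamp with columns t, x, y.
gt = pd.read_csv("ball_groundtruth.csv")    # external ground-truth track
est = pd.read_csv("ball_estimate.csv")      # estimator output to benchmark

# Associate each estimate with the nearest ground-truth timestamp.
merged = pd.merge_asof(est.sort_values("t"), gt.sort_values("t"),
                       on="t", suffixes=("_est", "_gt"), direction="nearest")

# Root-mean-square position error as a simple benchmark metric.
err = np.hypot(merged["x_est"] - merged["x_gt"],
               merged["y_est"] - merged["y_gt"])
print(f"RMSE: {np.sqrt(np.mean(err ** 2)):.3f} m over {len(err)} samples")
</code>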