Overview
The primary objective of this project is to develop
fundamental capabilities that enable multiple, distributed,
heterogeneous robots to coordinate tasks that cannot be accomplished
by the robots individually. The basic concept is to enable
individual robots to act independently, while still allowing for
tight, precise coordination when necessary. Individual robots will
be highly autonomous, yet will be able to synchronize their
behaviors, negotiate with one another to perform tasks, and
"advertise" their capabilities.
The proposed architecture supports the ability of robots to react
to changing and/or previously unknown conditions by replanning and
negotiating with one another if the new plans conflict with
previously agreed-upon cooperative behaviors. The resulting
capability will make it possible for teams of robots to undertake
complex coordinated tasks, such as assembling large structures, that
are beyond the capabilities of any one of the robots individually.
Emphasis will be placed on the reliability of the worksystem in
monitoring and handling unexpected situations, and on its
flexibility to reconfigure dynamically as situations change and/or
new robots join the team.
This project is a multi-center collaboration with participation
from Johnson Space Center (JSC)/TRACLabs and the National Institute
of Standards and Technology (NIST). CMU will focus on algorithms for
distributed task execution, task negotiation, planning under
uncertainty, and algorithms specific to the domain of multi-robot
assembly and construction.
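The "advertise and negotiate" idea above can be illustrated with a small market-style sketch. This is not the project's actual protocol; the class names, the cost table, and the lowest-bid rule are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    capabilities: frozenset  # the capabilities this robot "advertises"

    def bid(self, task):
        # A robot that advertises the required capability returns a cost
        # estimate; others abstain by returning None.
        if task.required in self.capabilities:
            return task.costs.get(self.name, float("inf"))
        return None

@dataclass
class Task:
    required: str
    costs: dict  # hypothetical per-robot cost estimates (stand-in for planning)

def negotiate(task, robots):
    """Award the task to the capable robot with the lowest bid."""
    bids = [(r.bid(task), r) for r in robots]
    bids = [(cost, r) for cost, r in bids if cost is not None]
    return min(bids, key=lambda b: b[0])[1] if bids else None

crane = Robot("robocrane", frozenset({"gross-motion"}))
rover = Robot("bullwinkle", frozenset({"fine-manipulation"}))
task = Task("fine-manipulation", {"bullwinkle": 3.0})
print(negotiate(task, [crane, rover]).name)  # bullwinkle
```

A real implementation would replace the static cost table with each robot's own planner, and would re-run the negotiation when replanning invalidates a previous agreement.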
Challenges
The main technical challenge of the project is to
develop an architectural framework that permits a high degree of
autonomy for each individual robot, while providing a coordination
structure that enables the group to act as a unified team. Our
approach is to extend current state-of-the-art hierarchical, layered
robot architectures being developed at CMU (TCA), TRACLabs (3T) and
NIST (RCS) to support distributed, coordinated operations. Our
proposed architecture is highly compatible with these single-agent
robot architectures, and will extend them to enable multiple robots
to handle complex tasks that require a fair degree of coordination
and autonomy. Research issues include distributed task execution,
task negotiation, and planning under uncertainty.
The architectural
approach will be validated by an increasingly complex series of
demonstrations in the area of multi-robot assembly with a
heterogeneous team of robots. The "team" will include the NIST
Robocrane, a roving eye, and a mobile manipulator.
Visual Servoing
Since assembly operations require high precision, we
use visual servoing to perform mating operations. Proof of principle
servoing has been demonstrated with a desktop-mounted manipulator
and a roving eye (stereo cameras mounted on a mobile robot). The
visual servoing computes the relative 6-DOF pose between two
fiducial markers and then a correction that reduces the
difference between the current relative pose and the desired one.
More information about XVision, the image library that we use, is
available.
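The correction step can be sketched as a proportional control loop on the pose error. This is a planar 3-DOF (x, y, heading) simplification of the full 6-DOF servoing, with an assumed gain; function names and values are illustrative, not the project's code.

```python
import math

def pose_error(current, desired):
    """Error from the current relative pose to the desired one.
    Poses are (x, y, theta) of one fiducial relative to the other."""
    dx = desired[0] - current[0]
    dy = desired[1] - current[1]
    # Wrap the heading error to [-pi, pi] so the correction is shortest.
    dth = math.atan2(math.sin(desired[2] - current[2]),
                     math.cos(desired[2] - current[2]))
    return (dx, dy, dth)

def servo_step(current, desired, gain=0.5):
    """One proportional correction toward the desired relative pose."""
    ex, ey, eth = pose_error(current, desired)
    return (current[0] + gain * ex,
            current[1] + gain * ey,
            current[2] + gain * eth)

pose = (0.10, -0.05, 0.3)  # measured relative pose (m, m, rad)
goal = (0.0, 0.0, 0.0)     # mated configuration
for _ in range(20):
    pose = servo_step(pose, goal)
# pose has converged close to the goal
```

In the real system, each iteration would re-measure the fiducial poses with the stereo cameras and send the correction to the manipulator rather than integrating it in software.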
The Robots
Gross Motion: The NIST Robocrane is a large gantry-type, inverted
Stewart platform capable of manipulating large loads. We use the
Robocrane for gross manipulation. Visit NIST for more information.
Mobile Manipulator: We use Bullwinkle, a mobile robot built by
RWII, to host a small five-degree-of-freedom arm capable of fine
manipulation. The robot is a four-wheel, skid-steered machine
equipped with onboard computing and inertial sensing. Visit RWII
for more information.
The Simulator
We are developing a simulator that will allow us to
test robot planning and architectural issues. It models robot
mechanisms and sensors such as stereo vision. The simulator will
enable testing of algorithms in repeatable configurations and in
configurations that are not easy to create physically. The system
that drives our robot testbed can be attached to the simulator
through an identical interface.
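The "identical interface" idea can be sketched with a shared abstract interface that both the simulator and the real testbed driver implement, so higher-level code never knows which backend it is driving. The class and method names below are illustrative assumptions, not the project's actual API.

```python
import math
from abc import ABC, abstractmethod

class RobotInterface(ABC):
    """Interface shared by the real testbed driver and the simulator."""
    @abstractmethod
    def send_velocity(self, v, w):
        """Command linear velocity v (m/s) and angular velocity w (rad/s)."""
    @abstractmethod
    def read_pose(self):
        """Return the robot's current (x, y, theta)."""

class SimulatedRobot(RobotInterface):
    """Trivial kinematic stand-in for the physical skid-steered robot."""
    def __init__(self):
        self.x = self.y = self.theta = 0.0
    def send_velocity(self, v, w, dt=0.1):
        # Integrate a simple unicycle model over one time step.
        self.theta += w * dt
        self.x += v * dt * math.cos(self.theta)
        self.y += v * dt * math.sin(self.theta)
    def read_pose(self):
        return (self.x, self.y, self.theta)

def drive_forward(robot: RobotInterface, steps=10):
    """Higher-level code depends only on RobotInterface, so swapping in
    the real driver requires no changes here."""
    for _ in range(steps):
        robot.send_velocity(0.5, 0.0)
    return robot.read_pose()

print(drive_forward(SimulatedRobot()))  # roughly (0.5, 0.0, 0.0)
```

Because both backends satisfy the same interface, algorithms can first be exercised in repeatable simulated configurations and then run unmodified on the hardware.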