Swarm robotics, a subfield of multiagent systems research, focuses on very
large teams of small robots working together toward a
common goal. The robots could eventually be engineered to
nanoscale size—invisible to the human eye—and number in the
hundreds, thousands, or tens of thousands per group. David Payton, a
research scientist at HRL Laboratories, expects these robot
groups to become more effective as their numbers increase.
“Generally, other methods of multiagent teaming
have a much tighter command hierarchy and tend not to easily
scale to large numbers of units because of this,” says Payton.
Swarm robots, however, are generally autonomous,
cooperating and coordinating among themselves. They have the
potential to quickly enable applications that so far have existed
only in science fiction.
Swarm communication, cooperation
Payton works with pheromone robots, which he
and his colleagues modeled after the chemical insects use to
communicate. Ants, for example, leave a pheromone trail that
attracts other ants. When one ant discovers a food source, it
backtracks along its own trail, doubling the strength of the
chemical marker and attracting other ants that further strengthen it
until the food source runs out.
Payton’s robots use a virtual pheromone.
Instead of spreading a chemical landmark, pheromone robots use
communication to “spread information and create gradients [like
slopes, or distance vectoring] in the information space,” Payton
says. By using these virtual pheromones, the robots can exchange
directional communications and thereby sense clear pathways and the
number of network hops between them.
Each robot has arrays of directional transmitters and
receivers. It can send infrared signals in certain directions and
detect where signals are coming from. The communications themselves
are simple message packets.
Infrared requires line of sight to send and receive.
According to Payton, this ensures that two robots successfully
communicating with each other have an open space between them. In
spatial activities like those of distributed swarms, you must know
where a signal is coming from and how far away the sending robot is.
Payton’s pheromone robots tag arriving messages with their signal
intensity, which degrades over distance. And because pheromone robots
constitute a network, each robot can also determine how many network
hops separate it from any other.
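Payton's hop-count gradient lends itself to a compact sketch. The Python fragment below is illustrative only, not HRL's implementation, and the class and field names are assumptions: each robot re-broadcasts any message that improves its best-known hop count for a source, so a hop-count field that rises with distance (the gradient itself) spreads outward from the originator.

```python
from dataclasses import dataclass

@dataclass
class Pheromone:
    """A minimal virtual-pheromone packet (field names are assumptions)."""
    source_id: int  # robot that originated the message
    kind: str       # what was detected, e.g. "victim"
    hops: int       # network hops from the source to the receiver

class Robot:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.gradient = {}   # (source_id, kind) -> lowest hop count heard
        self.neighbors = []  # robots currently in infrared line of sight

    def emit(self, kind):
        """Originate a pheromone, e.g., on detecting a victim's breathing."""
        self.gradient[(self.robot_id, kind)] = 0
        self._broadcast(Pheromone(self.robot_id, kind, hops=1))

    def receive(self, p):
        """Relay only improvements, so the flood terminates and every
        robot ends up knowing its hop distance to the source."""
        key = (p.source_id, p.kind)
        if p.hops < self.gradient.get(key, float("inf")):
            self.gradient[key] = p.hops
            self._broadcast(Pheromone(p.source_id, p.kind, p.hops + 1))

    def _broadcast(self, p):
        for n in self.neighbors:
            n.receive(p)

# Three robots in a line: a <-> b <-> c.
a, b, c = Robot(0), Robot(1), Robot(2)
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.emit("victim")
print(c.gradient)  # {(0, 'victim'): 2} -- c is two hops from the source
```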
Pheromone swarm applications
Search and rescue missions are one military, public,
and private-sector application area for pheromone swarms. When
dispersed into a damaged building, the robots would spread out and
find injured victims by detecting sound, perhaps breathing. “When a
robot is near enough to [make a detection],” says Payton, “it could
send out a signal and create a gradient throughout the robot
network.” A human rescuer could then follow the gradient mapping to
the injured party.
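Following such a gradient to its source is then a greedy walk downhill in hop count. A minimal sketch, reusing the hypothetical Robot class from the previous listing:

```python
def step_toward_source(robot, key):
    """One step of gradient descent on hop counts: return the in-sight
    neighbor with the lowest stored hop count for `key`, or None if no
    neighbor is lower (i.e., we are already at the source)."""
    here = robot.gradient.get(key, float("inf"))
    candidates = [n for n in robot.neighbors
                  if n.gradient.get(key, float("inf")) < here]
    if not candidates:
        return None
    return min(candidates, key=lambda n: n.gradient[key])
```

A human rescuer carrying a receiver could follow the same rule, always moving toward the robot reporting the smallest hop count.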
Pheromone robots could detect an enemy target in a
building as well. A soldier could enter a building, according to
Payton, open his or her jar of nanosized robots, and dump them out
so that they disperse inside the building. If one of them detects
human movement or sound by means of a sensor, it can transmit a
signal through all the nanobots until it gets back to the soldier.
According to Payton, the soldier would see highlighted nanobots in
an augmented-reality helmet display that would point the way to the
source.
Formation control is another application area. Several
nanobots could join to form columns or an array. “You might want to
use them as a distributed antenna, for example,” says Payton.
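Formation control of this kind can be caricatured with a shared reference point, with each robot computing its own slot and steering toward it. The sketch below is hypothetical; as Payton notes later in the article, without such a reference point each formation comes out slightly different.

```python
def column_slot(index, reference, spacing=1.0):
    """Target position for robot number `index` in a single-file column
    anchored at `reference`, an (x, y) position all robots agree on."""
    x, y = reference
    return (x, y - index * spacing)

# Five robots form a column hanging below a shared landmark at the origin.
slots = [column_slot(i, (0.0, 0.0)) for i in range(5)]
print(slots)  # [(0.0, 0.0), (0.0, -1.0), (0.0, -2.0), ...]
```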
Building security is another application. For this,
you need robots with different kinds of sensors, and many with none
at all, to make up the communications infrastructure. “Let’s say
you’re trying to detect intruders,” says Payton. A motion-sensing
robot isn’t sufficient for detecting intrusion. “What you really
want is to know that you have motion and sound together in the same
place,” he says.
By setting up a pheromone gradient, you can attract
acoustic-sensing robots to the motion-sensing robot that detected
the motion. Then you can tell whether you have sound and motion in
the same place. “The motion-sensing robot sends out a signal that
propagates through the network of robots and establishes this
network hop-count gradient,” says Payton. “The acoustic-sensing
robots detect that gradient and are attracted to its source.”
However, the system requires a decision-making
mechanism that automatically sends one acoustic-sensing robot and
not all of them; otherwise you’ve weakened your security elsewhere
in the building. “Every acoustic-sensing robot that wants to go
where the motion-sensing guy is transmits another pheromone,” says
Payton. Based on the gradient, this pheromone encodes how close the
transmitting robot is to the motion sensor.
While transmitting the signal indicating their proximity to the
motion sensor, acoustic-sensing robots also listen to hear if any
robot is closer than they are.
“If there are two guys and one says, ‘I’m four hops
away,’ and the other guy says, ‘I’m five hops away,’ when that
five-hop guy receives the signal from the four-hop guy, he shuts up
and says I’m just going to wait,” explains Payton. “Meanwhile that
four-hop guy says, ‘I don’t hear anybody else that’s closer than me
so I’m going.’” This happens automatically and almost instantaneously.
If the intruder destroys the first acoustic robot, the
next-closest robot—no longer hearing the signal—goes automatically.
“Without any centralized decision making, the swarm automatically
chooses who gets to go there, and it is also very robust in that if
one robot is damaged or can’t go, the next one goes,” says
Payton.
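The selection mechanism Payton describes amounts to a distributed minimum election: every candidate advertises its hop distance, stands down when it hears a smaller one, and steps back in if the winner falls silent. A rough sketch of that logic, with made-up timing and names (this is not HRL's protocol):

```python
import time

class AcousticCandidate:
    def __init__(self, name, hops):
        self.name = name
        self.hops = hops               # hop distance from the pheromone gradient
        self.last_closer_heard = None  # when we last heard a closer candidate

    def on_claim_heard(self, other_hops, now):
        """Another candidate broadcast its hop distance. If it is closer,
        remember that and stay quiet rather than responding."""
        if other_hops < self.hops:
            self.last_closer_heard = now

    def should_go(self, now, silence_timeout=2.0):
        """Go if no closer candidate has been heard recently. The timeout
        also handles failure: if the closest robot is destroyed and stops
        transmitting, the next-closest robot's timer expires and it goes."""
        return (self.last_closer_heard is None
                or now - self.last_closer_heard > silence_timeout)

four_hop = AcousticCandidate("A", hops=4)
five_hop = AcousticCandidate("B", hops=5)
now = time.time()
five_hop.on_claim_heard(four_hop.hops, now)              # B hears A's claim
print(four_hop.should_go(now), five_hop.should_go(now))  # True False
```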
Nanoscale sizes introduce opportunities for medical applications and
others in which swarm robots are seamlessly attached to humans. One
such application involves injecting robots into people to perform
tasks such as attacking cancer cells, says Payton.
Multiagent techniques
At the Georgia Institute of Technology, smaller teams
of robots based on a computational model, rather than a pheromone
model, could eventually scale up into larger teams. “Our whole
paradigm is behavior based,” says Ronald C. Arkin, director of Georgia
Tech’s mobile robot lab. Communication is largely accomplished
by robots responding to other robots’ behaviors.
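The article doesn't detail the controllers, but a common behavior-based formulation, offered here purely as an illustrative sketch, blends each active behavior's output vector instead of executing a central plan:

```python
def blend_behaviors(behaviors):
    """Sum weighted (x, y) vectors from each active behavior; the robot
    steers along the result. High-gain behaviors such as obstacle
    avoidance dominate nearby threats without any central coordinator."""
    vx = sum(gain * x for gain, (x, y) in behaviors)
    vy = sum(gain * y for gain, (x, y) in behaviors)
    return vx, vy

# Hypothetical mix: goal attraction, obstacle repulsion, formation keeping.
command = blend_behaviors([
    (1.0, (1.0, 0.0)),   # move-to-goal: pull east
    (2.0, (0.0, 0.5)),   # avoid-obstacle: push north, weighted higher
    (0.5, (-0.2, 0.0)),  # keep-formation: slight westward nudge
])
print(command)  # (0.9, 1.0)
```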
[Figure: A soldier uses an iconic visual-programming environment
(left inset) and an aerial view of the objective area (right inset),
both available on his laptop screen, to task teams of robots such as
the one shown in the foreground. The GT Hummer in the background is
the command-and-control vehicle. The system has coordinated teams of
more than 10 robots in field tests at Ft. Benning, Georgia.]
In Johns Hopkins University’s Department of Mechanical
Engineering, Greg Chirikjian, department chair, is working on
self-replicating, self-repairing multiagent robots. The
robots work in teams.
“In other words, team members look at other team
members to figure out if they are working properly and, if not, how
they can go in and fix them or take them apart,” says
Chirikjian.
The current working models are small mobile robots
with two wheels, two motors, a small computer, and a small gripper.
They go around, pick up the parts, and put them together. “We operate
in two modes,” says Chirikjian. In remote-controlled mode, the human
user does the sensing, not the robots. Autonomous mode depends on a
structured environment with parts pre-positioned at locations the
robots know; the robots can then sense landmarks.
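As a caricature, the two modes differ mainly in who supplies the perception. The skeleton below is hypothetical, not the Johns Hopkins controller:

```python
def next_action(mode, human_command=None, known_part_sites=None):
    """Dispatch between the two operating modes Chirikjian describes."""
    if mode == "remote":
        # Remote-controlled: the human does the sensing and issues commands.
        return human_command
    if mode == "autonomous":
        # Autonomous: parts are pre-positioned at known locations in a
        # structured environment; drive to the next site using landmarks.
        return ("drive_to", known_part_sites[0]) if known_part_sites else ("idle",)
    raise ValueError(f"unknown mode: {mode}")

print(next_action("remote", human_command=("grip",)))        # ('grip',)
print(next_action("autonomous", known_part_sites=[(2, 3)]))  # ('drive_to', (2, 3))
```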
Chirikjian expects to see more advanced working models
that can replicate and repair more independently in about a
year.
As the research progresses, the onboard computers will
house the distributed communications technologies, which have not
yet been selected. “What we are currently working on is the group
behavior. We have not settled on a particular technology for the
communication, whether it be optical infrared or radio, [for
example],” says Chirikjian.
Multiagent robots applications
Arkin is applying his research to teams of robot
vehicles for the Naval Air Systems Command (NAVAIR). “We’re talking about unmanned air
vehicles, unmanned ground vehicles, unmanned water surface vehicles,
and unmanned undersea vehicles all working collaboratively together
for a potential range of military scenarios,” says Arkin. This
application is intended for use in tight seacoast environments.
Military applications for Chirikjian’s
self-replicating robots include reconnaissance missions in which, if
one robot is hit, the others scavenge its parts for reuse.
These replicators have applications in planetary
exploration as well. “If you send a single robot out and it breaks
down, you’re stuck,” says Chirikjian, “but if you have several
robots that go out as a group and they have the ability to diagnose
each other, the potential exists for much greater autonomy and
robustness.”
Obstacles to implementations
Payton’s pheromone swarms can be scaled down quite a
bit but at a proportional loss in intelligence. “You can’t put your
Pentium V in there,” says Payton, but with the right algorithmic
mechanisms and proper communications, you can get the swarm as a
whole to do some very intelligent things.
The rows in formation-control applications don’t yet
exactly duplicate each other. As when crystals form, each robot
formation is slightly different, “unless they have some kind of
reference point they can use,” says Payton.
According to Arkin, the biggest problems for
multiagent robots are things like communications issues and power
maintenance when robots are active in the field for extended
periods. Communications is a big problem with robots for the
military or the private sector. “There are electronic
countermeasures, there’s jamming,” says Arkin, not to mention the
quality-of-service issues that we’re familiar with from cell
phones.
The military’s current overarching approach to
battle—network-centric warfare—presents a hurdle for Arkin’s work
with NAVAIR. “Undersea vehicles can’t
communicate in the same way as unmanned air vehicles can. There are
time constraints, different capabilities,” he says.
Integrating robots with people also remains a large
problem. Arkin sees robots becoming full-fledged partners with us.
“So, how do we engineer systems that are restricted in terms of
speed because of human [limitations]?” he asks. “How do we take the
best of both human and machine intelligence and [fuse] them into
fully integrated, complex societies of robots and people?”
Just as distributed computation and
multiprocessing changed the paradigm for solving fundamental
computation problems, so will the availability of cheap, distributed
robotic assets, says Arkin. You’ll be able to
use the technology to distribute sensors widely and draw information
from vantage points around the globe. This opens the opportunity to
take action at many points at once throughout the world. “You can’t
do that with only a single robot,” says Arkin.
Copyright 2005 IEEE. Reprinted from the IEEE Computer
Society's Computer magazine.
This material is posted here with permission of the
IEEE. Internal or personal use of this material is
permitted. However, permission to reprint/republish this material
for advertising or promotional purposes or for creating new
collective works for resale or redistribution must be obtained from
the IEEE by writing to pubs-permissions@ieee.org.
By choosing to view this document, you agree to all
provisions of the copyright laws protecting it.