Abstract
Decision theory offers a rich
normative framework for planning under
uncertainty. Extending this framework
to situations that involve a group of
decision makers, such as teams of robots or arrays of sensors, has been challenging. Nevertheless, the past
ten years have seen a rapid
development of new formal models and
algorithms for planning in such
multi-agent settings. The new models
extend Markov decision processes and
game-theoretic frameworks to support planning for decentralized decision-making.
The tutorial surveys these formal
models, their properties and
computational complexity, as well as
state-of-the-art solution
techniques. It describes exact and
approximate algorithms that use a
variety of plan representations, such as policy trees and finite-state controllers. It examines the role of
communication in these settings and
how correlated behavior can be achieved without communication. It
analyzes the performance of these
algorithms and describes current
research efforts to improve their
scalability.
The tutorial is aimed at graduate
students and researchers who want to
enter this emerging field or to better
understand recent results in this area
and their implications for the design of multi-agent systems. Participants
should have a basic understanding of
decision theory and planning under
uncertainty using Markov models.