Decision Making in Multiagent Settings
Abstract
Decision making is a key capability of
autonomous agents. Drawing motivation,
in part, from search and rescue
applications in disaster management,
the tutorial will span the range of
multiagent interactions of increasing
generality, and study a set of optimal
and approximate solution techniques for
time-extended decision making in both
noncooperative and cooperative
multiagent contexts. This
self-contained tutorial will begin
with the relevant portions of game
theory and culminate in several
advanced decision-theoretic models of
agent interactions.
The tutorial is aimed at graduate
students and researchers who want to
enter this emerging field or to better
understand recent results in this area
and their implications for the design
of multiagent systems. Participants
should have a basic understanding of
decision theory and planning under
uncertainty using Markov models.
Outline
- Part I: Game theory, individual
decision making and uncertainty
utilization. Speakers: Prashant,
Zinovi and Piotr.
- Search and Rescue Applications
in Disaster Management
- Requirements for the multiagent decision model and solution
- Repeated strategic games of complete information
- Repeated Bayesian games
- Fictitious play
- Partially Observable Stochastic Games
- Special case: MDP and POMDP
- Interactive POMDPs (I-POMDPs)
- Uncertainty utilization
- Part II: Team decision making. Speakers: Matthijs, Chris and Shlomo.
- Multiagent decision-theoretic
planning: introduction, related
work, examples.
- Models: Dec-POMDP model,
complexity, special cases.
- Algorithms: optimal value
functions, dynamic programming,
JESP, GMAA*, MBDP,
infinite-horizon algorithms,
exploiting locality of
interaction, communication.
- Application problem domains and software tools
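As a small concrete illustration of one Part I topic, the sketch below runs fictitious play in the classic matching-pennies game: each player best-responds to the empirical frequencies of the opponent's past actions, and the empirical play frequencies converge toward the mixed equilibrium (1/2, 1/2). The payoff matrix, action labels, and function names here are our own illustrative choices, not drawn from the tutorial materials.

```python
# Fictitious play in matching pennies (illustrative sketch only).
# Payoffs are for the row player: +1 if actions match, -1 otherwise;
# the column player's payoff is the negation (zero-sum game).
PAYOFF = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}
ACTIONS = ["H", "T"]

def best_response(opp_counts, row_player):
    """Pick the action maximizing expected payoff against the
    empirical mixed strategy implied by opp_counts."""
    total = sum(opp_counts.values())
    def expected(a):
        value = 0.0
        for b, n in opp_counts.items():
            p = n / total
            # Row player reads PAYOFF directly; the column player
            # gets the negated payoff with arguments swapped.
            payoff = PAYOFF[(a, b)] if row_player else -PAYOFF[(b, a)]
            value += p * payoff
        return value
    return max(ACTIONS, key=expected)

def fictitious_play(rounds=2000):
    # Each player tracks empirical counts of the opponent's actions
    # (initialized to 1 to avoid division by zero on the first round).
    counts1 = {"H": 1, "T": 1}  # player 1's counts of player 2's play
    counts2 = {"H": 1, "T": 1}  # player 2's counts of player 1's play
    for _ in range(rounds):
        a1 = best_response(counts1, row_player=True)
        a2 = best_response(counts2, row_player=False)
        counts1[a2] += 1
        counts2[a1] += 1
    # Empirical frequency with which player 1 played "H"; in this
    # zero-sum game it approaches 1/2, the mixed Nash equilibrium.
    return counts2["H"] / sum(counts2.values())
```

Actual play cycles rather than settling on a pure action, but the empirical frequencies still converge, which is the standard convergence guarantee for fictitious play in two-player zero-sum games.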
Material