Decision-Theoretic Planning for Multi-Agent Systems: New Directions and Opportunities

Tutorial at IJCAI 2009

An updated version of this tutorial was presented at AAMAS 2010.


Decision theory offers a rich normative framework for planning under uncertainty. Extending this framework to situations that involve a group of decision makers, such as teams of robots or arrays of sensors, has been challenging. Nevertheless, the past ten years have seen rapid development of new formal models and algorithms for planning in such multi-agent settings. The new models extend Markov decision processes and game-theoretic frameworks to create plans for decentralized decision making.
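The central model in this line of work, the decentralized POMDP (Dec-POMDP), is typically defined as a tuple of agents, states, per-agent actions and observations, and shared transition, reward, and observation functions. As a rough illustration only, the sketch below encodes that tuple in Python; the class and the toy numbers (loosely inspired by the Dec-Tiger benchmark) are the author's own examples, not part of the tutorial.

```python
# Hypothetical sketch of the Dec-POMDP tuple <I, S, {A_i}, T, R, {O_i}, Z>.
# All names and numbers here are illustrative, not a reference implementation.
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

@dataclass(frozen=True)
class DecPOMDP:
    agents: Tuple[str, ...]                  # I: the set of agents
    states: FrozenSet[str]                   # S: finite set of world states
    actions: Dict[str, FrozenSet[str]]       # A_i: per-agent action sets
    observations: Dict[str, FrozenSet[str]]  # O_i: per-agent observation sets
    transition: Callable[..., float]         # T(s, joint_a, s'): transition prob.
    reward: Callable[..., float]             # R(s, joint_a): single shared team reward
    obs_fn: Callable[..., float]             # Z(s', joint_a, joint_o): observation prob.

# A tiny two-agent instance: listening preserves the state and incurs a
# small joint cost (values here are made up for illustration).
dec_tiger = DecPOMDP(
    agents=("agent1", "agent2"),
    states=frozenset({"tiger-left", "tiger-right"}),
    actions={i: frozenset({"listen", "open-left", "open-right"})
             for i in ("agent1", "agent2")},
    observations={i: frozenset({"hear-left", "hear-right"})
                  for i in ("agent1", "agent2")},
    transition=lambda s, joint_a, s2: 1.0 if s2 == s else 0.0,
    reward=lambda s, joint_a: -2.0 if joint_a == ("listen", "listen") else 0.0,
    obs_fn=lambda s2, joint_a, joint_o: 0.25,  # placeholder: uniform joint obs.
)
```

Note that, unlike a single-agent POMDP, the reward is shared by the team while actions and observations remain local to each agent, which is what makes the planning problem decentralized.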

The tutorial surveys these formal models, their properties and computational complexity, as well as state-of-the-art solution techniques. It describes exact and approximate algorithms that use a variety of plan representations, such as policy trees or finite-state controllers. It examines the role of communication in these settings and how correlated behavior can be achieved without communication. It analyzes the performance of these algorithms and describes current research efforts to improve their scalability.
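To make the finite-state-controller representation concrete: each agent runs a small automaton whose nodes select actions and whose edges are followed on observations, giving a compact, memory-bounded alternative to policy trees. The minimal sketch below is the author's illustration under assumed names, not code from the tutorial.

```python
# Illustrative finite-state controller (FSC) for one agent's policy.
# Nodes choose actions; observations drive node transitions.
class FSC:
    def __init__(self, action_at, next_node):
        self.action_at = action_at    # node -> action
        self.next_node = next_node    # (node, observation) -> next node

    def act(self, node):
        return self.action_at[node]

    def step(self, node, observation):
        return self.next_node[(node, observation)]

# A hypothetical two-node controller: keep listening until a "left"
# signal is heard, then switch to the node that opens a door.
fsc = FSC(
    action_at={0: "listen", 1: "open"},
    next_node={(0, "left"): 1, (0, "right"): 0,
               (1, "left"): 1, (1, "right"): 0},
)

node, actions = 0, []
for obs in ["right", "left", "left"]:
    actions.append(fsc.act(node))  # act based on current memory node
    node = fsc.step(node, obs)     # update memory from the observation
```

Because the controller's memory is a fixed set of nodes rather than a full observation history, this representation is what many of the approximate algorithms discussed in the tutorial optimize over.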

The tutorial is aimed at graduate students and researchers who want to enter this emerging field or to better understand recent results in this area and their implications for the design of multi-agent systems. Participants should have a basic understanding of decision theory and planning under uncertainty using Markov models.


  1. Introduction
  2. Models
    • Single-agent MDPs and POMDPs
    • Decentralized POMDPs
    • Subclasses and complexity
  3. Algorithms
    • Exact algorithms
    • Approximation methods
    • Specialized algorithms for subclasses
  4. Problem domains and software tools
  5. Wrap-up


Slides in PDF (11 MB), PDF (bzip2, 3.9 MB) or PDF (zip, 8.1 MB). List of notation used.


Some pointers relevant to this tutorial.