Network Science: Models and Distributed Algorithms (18799 H/PH)
IST-CMU PhD Course
Fall 2016
Contact
Instructor: João Xavier
jxavier (@) isr.ist.utl.pt
http://users.isr.ist.utl.pt/~jxavier
TA: João Martins (joaoa ( @ ) andrew.cmu.edu)
Announcements
- There is no lecture on Wednesday, Oct 19.
- Office hours in Lisbon are on Thursdays, 12h30-13h30, North Tower, 7th floor.
- There is no lecture on Wednesday, Oct 5.
- Lectures start on September 12, 2016, and end on December 7, 2016. Lectures are on Mondays and Wednesdays, 12h30-14h00 Pittsburgh time (17h30-19h00 Lisbon time). The classroom at the IST campus is the CMU Videoconference Room at Pavilhão de Engenharia Civil (room V0.15). The classroom at CMU is HH1107.
Course info
Description: Multi-agent systems model the emerging networks of devices that sense, compute, and communicate: the Internet of Things, wireless camera networks, smart grids, vehicular networks, teams of cooperative robots, and computer networks for distributed machine learning. Commonly, agents (say, a tiny sensor, a robot, or a computer) take local measurements and need to extract global information from the local datasets: finding a target position from several range measurements in a team of robots, reconstructing the state of a cyber-physical system from wireless sensor network readings, or learning a classifier from distributed datasets in Big Data applications. To extract information from the local datasets, we need distributed processing: the centralized paradigm, in which a central node receives all local datasets and computes the global information, does not scale to the massive size of emergent systems. In distributed processing, no central node exists; agents collaborate with their neighbors to reproduce the centralized solution. This PhD Network Science course covers the latest tools from the rapidly evolving field of distributed processing, for both static and dynamic networks.
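To give a flavor of the "agents collaborate with neighbors to reproduce the centralized solution" idea, here is a minimal sketch of average consensus on a static undirected graph (the topic of Part 1, item 2). The graph, initial measurements, and Metropolis weight rule below are illustrative choices, not part of the course materials.

```python
import numpy as np

# Sketch of average consensus: each agent repeatedly replaces its value
# with a weighted average of its own and its neighbors' values. With a
# symmetric, doubly stochastic weight matrix W on a connected graph,
# every agent converges to the average of the initial measurements,
# i.e., the same answer a central node would compute.

# Illustrative path graph on 4 agents: 0 - 1 - 2 - 3.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

degree = np.zeros(n)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1

# Metropolis weights (one common choice): symmetric and doubly stochastic.
W = np.zeros((n, n))
for i, j in edges:
    w = 1.0 / (1 + max(degree[i], degree[j]))
    W[i, j] = W[j, i] = w
W += np.diag(1 - W.sum(axis=1))  # self-weights so each row sums to 1

x = np.array([1.0, 5.0, 2.0, 4.0])  # local measurements, one per agent
target = x.mean()                   # the centralized answer

for _ in range(200):
    x = W @ x  # one round: every agent averages with its neighbors

print(np.allclose(x, target))  # True: all agents agree on the average
```

Note that each iteration uses only neighbor-to-neighbor communication; no node ever sees the full dataset.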
Part 1: static networks
1. Background: graphs and Perron-Frobenius theory
2. Consensus with undirected and directed communications
3. Distributed optimization: convex and nonconvex methods
4. Distributed detection and estimation
Part 2: dynamic networks
1. Background: random matrix theory and martingales
2. Consensus with undirected and directed communications
3. Distributed optimization: convex and nonconvex methods
4. Distributed detection and estimation
Grading: 60% homeworks + 40% 24-hour take-home exam
Lectures
1. Course overview
2. Consensus in static undirected networks
3. Consensus in static directed networks
4. Optimization in static undirected networks
5. Estimation in static undirected networks
6. Detection in static undirected networks (to appear)
7. Consensus in dynamic undirected networks (to appear)
8. Optimization in dynamic undirected networks (to appear)
Homeworks
Homework 1
Homework 2
Homework 3
Homework 4
Homework 5