Averaging algorithms and distributed optimization

author: John N. Tsitsiklis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT)
published: Jan. 13, 2011,   recorded: December 2010

Description

In distributed averaging and consensus algorithms, processors exchange and update certain values (or "estimates", or "opinions") by forming a local average with the values of their neighbors. Under suitable conditions, such algorithms converge to consensus (every processor ends up holding the same value) or even average-consensus (consensus is achieved on the average of the initial values held by the processors). Algorithms of this type have been proposed as a subroutine of distributed optimization methods, used to combine the results of different processors while a master algorithm is running. We give an overview of a few applications of averaging algorithms, with a focus on gradient-like optimization methods. We then highlight some results, old and new, concerning convergence rates. We finally discuss some open problems.
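For concreteness, here is a minimal Python sketch of the kind of iteration described above: processors repeatedly replace their value with a weighted average of their neighbors' values, and, in the distributed-optimization variant, also take a step along their own local gradient. The graph, the Metropolis weight rule, and the step size below are illustrative assumptions, not the specific setup of the talk.

# A minimal sketch (not the specific schemes from the talk) of:
#  (1) the linear averaging iteration x_{k+1} = W x_k, and
#  (2) its use as a subroutine in a consensus-based gradient method,
#      where each processor mixes its neighbors' estimates and then takes
#      a step along its own local gradient.
# The Metropolis weight construction is an illustrative assumption; it makes
# W doubly stochastic, which is what yields average-consensus.
import numpy as np

def metropolis_weights(adjacency):
    """Doubly stochastic weights from a 0/1 symmetric adjacency matrix."""
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # self-weight so each row sums to 1
    return W

def average_consensus(x0, W, num_iters=200):
    """Repeated local averaging: every entry converges to mean(x0)."""
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        x = W @ x
    return x

def distributed_gradient(grads, x0, W, step=0.05, num_iters=500):
    """Consensus plus local gradient steps:
    x_i <- sum_j W_ij x_j - step * grad_i(x_i).
    grads[i] is processor i's local gradient; with a small (or diminishing)
    step size the estimates approach a minimizer of the sum of local costs.
    """
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        x = W @ x - step * np.array([g(xi) for g, xi in zip(grads, x)])
    return x

if __name__ == "__main__":
    # Path graph on 4 processors: 0 - 1 - 2 - 3.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
    W = metropolis_weights(A)

    x0 = [1.0, 5.0, 3.0, 7.0]
    print(average_consensus(x0, W))   # every entry close to mean(x0) = 4.0

    # Local costs f_i(x) = 0.5 * (x - b_i)^2; their sum is minimized at mean(b).
    b = [1.0, 5.0, 3.0, 7.0]
    grads = [lambda x, bi=bi: x - bi for bi in b]
    print(distributed_gradient(grads, x0, W))  # entries close to 4.0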

Download slides: nipsworkshops2010_tsitsiklis_aad_01.pdf (4.6 MB)
