Modern Convex Optimization Methods for Large-scale Empirical Risk Minimization

author: Mark Schmidt, Department of Computer Science, University of British Columbia
published: Dec. 5, 2015, recorded: October 2015
Download slides: icml2015_schmidt_risk_minimization.pdf (7.3 MB)


Description

This tutorial reviews recent advances in convex optimization for training (linear) predictors via (regularized) empirical risk minimization. We focus exclusively on practically efficient methods that also come with complexity bounds confirming their suitability for huge-scale problems (a very large number of examples or a very large number of features). The first part of the tutorial is devoted to modern primal methods (of the stochastic gradient descent variety), while the second part focuses on modern dual methods (of the randomized coordinate ascent variety). Although we make this distinction, the primal and dual methods are closely linked, and some of these links will be highlighted. We also comment on mini-batch, parallel, and distributed variants of the methods, as these are important considerations for applications involving big data.
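
As a concrete illustration of the primal side of the tutorial, the following is a minimal sketch (not taken from the slides) of plain stochastic gradient descent applied to an L2-regularized logistic-regression empirical risk minimization objective. The loss, the decreasing step-size schedule, and the parameter names (lam, step, n_passes) are illustrative assumptions, not the tutorial's recommended settings; the more advanced methods covered in the talk (e.g., variance-reduced stochastic gradient and randomized dual coordinate ascent) refine this basic update.

    # Minimal sketch: plain SGD on the L2-regularized logistic-regression ERM objective
    #   f(w) = (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)) + (lam/2) * ||w||^2.
    # All parameter choices here are illustrative assumptions.
    import numpy as np

    def sgd_logistic(X, y, lam=1e-3, step=0.1, n_passes=10, seed=0):
        """Plain SGD; labels y must be in {-1, +1}."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for t in range(n_passes * n):
            i = rng.integers(n)                          # sample one training example
            margin = float(y[i] * (X[i] @ w))
            # gradient of log(1 + exp(-margin)) w.r.t. w, plus the L2 term
            grad = -y[i] * X[i] / (1.0 + np.exp(min(margin, 30.0))) + lam * w
            w -= step / (1.0 + lam * step * t) * grad    # decreasing step size
        return w

    # Tiny usage example on synthetic data.
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 5))
        y = np.sign(X @ rng.standard_normal(5))
        w = sgd_logistic(X, y)
        print("training accuracy:", np.mean(np.sign(X @ w) == y))

This single-example update is cheap per iteration, which is what makes the stochastic-gradient family attractive for problems with very many training examples; mini-batch and parallel variants replace the single sampled index with a small set of indices processed together.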
