Sparse Methods for Machine Learning: Theory and Algorithms

author: Francis R. Bach, INRIA - SIERRA project-team
published: Jan. 19, 2010,   recorded: December 2009

Videos:
Part 1 (1:00:34)
Part 2 (1:00:10)

Description

Regularization by the L1-norm has attracted a lot of interest in recent years in statistics, machine learning and signal processing. In the context of least-squares linear regression, the problem is usually referred to as the Lasso or basis pursuit. Much of the early effort has been dedicated to algorithms that solve the optimization problem efficiently, either through first-order methods or through homotopy methods that lead to the entire regularization path (i.e., the set of solutions for all values of the regularization parameter) at the cost of a single matrix inversion. A well-known property of regularization by the L1-norm is the sparsity of the solutions: it leads to loading vectors with many zeros, and thus performs model selection on top of regularization. Recent work has looked precisely at the model consistency of the Lasso: if we know that the data were generated from a sparse loading vector, does the Lasso actually recover the sparsity pattern as the number of observations grows? Moreover, how many irrelevant variables can we include while still correctly inferring the relevant ones?

The objective of the tutorial is to give a unified overview of the recent contributions of sparse convex methods to machine learning, both in terms of theory and algorithms. The course is divided into three parts: the first part focuses on the plain L1-norm and variable selection, introducing key algorithms and key theoretical results. The second and third parts then turn to more structured machine learning problems, on vectors (second part) and matrices (third part), such as multi-task learning, sparse principal component analysis, multiple kernel learning and sparse coding.
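As a concrete illustration (not part of the lecture itself), the sketch below fits the Lasso, min_w (1/(2n)) ||y − Xw||² + λ||w||₁, on synthetic data and then traces its entire regularization path with the LARS homotopy method, using scikit-learn; the dimensions, noise level and regularization value are arbitrary assumptions chosen for demonstration.

# A minimal sketch (not from the lecture): Lasso sparsity and its
# regularization path on synthetic data. Dimensions, noise level and
# regularization value are arbitrary assumptions for illustration.
import numpy as np
from sklearn.linear_model import Lasso, lars_path

rng = np.random.RandomState(0)
n, p, k = 100, 50, 5                     # observations, variables, true non-zeros
X = rng.randn(n, p)
w_true = np.zeros(p)
w_true[:k] = rng.randn(k)                # sparse loading vector
y = X @ w_true + 0.1 * rng.randn(n)

# At a fixed regularization level, the L1 penalty drives most
# coefficients exactly to zero, performing model selection.
lasso = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", int(np.sum(lasso.coef_ != 0)))

# The LARS homotopy method computes the entire regularization path,
# i.e. the solutions for all values of the regularization parameter.
alphas, _, coefs = lars_path(X, y, method="lasso")
print("path breakpoints:", len(alphas), "coefficient matrix:", coefs.shape)

If the true loading vector is sparse and the irrelevant variables are not too correlated with the relevant ones, the recovered support should match the planted non-zeros, which is exactly the model-consistency question raised above.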

Download slides: nips09_bach_smm.pdf (6.2 MB)

