Sparse Algorithms are Not Stable: A No-free-lunch Theorem

author: Huan Xu, Department of Mechanical Engineering, National University of Singapore
published: Jan. 16, 2013,   recorded: December 2012,   views: 3843

Description

We consider two widely used notions in machine learning: sparsity and stability. Both are deemed desirable and are believed to lead to good generalization ability. We show that these two notions contradict each other: a sparse algorithm cannot be stable, and vice versa. Thus, one has to trade off sparsity and stability when designing a learning algorithm. In particular, this implies that, in contrast to \ell_2-regularized regression, \ell_1-regularized regression (Lasso) cannot be stable.
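As an informal illustration (not part of the lecture), the following Python sketch compares how much the learned coefficient vectors of Lasso (\ell_1-regularized) and ridge (\ell_2-regularized) regression move when a single training point is removed, a leave-one-out proxy for the stability notion discussed above. The synthetic data, the regularization weight alpha=0.1, and the use of scikit-learn are assumptions made purely for illustration; the magnitude of the gap will depend on the data and the regularization strength.

```python
# Minimal sketch (illustrative only): measure how far the fitted coefficients
# move when one training point is dropped, for l1- vs l2-regularized regression.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, d = 50, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]               # sparse ground-truth coefficients
y = X @ w_true + 0.1 * rng.standard_normal(n)

def max_leave_one_out_change(model_cls, **kwargs):
    """Largest l2 change in the coefficients over all leave-one-out training sets."""
    full = model_cls(**kwargs).fit(X, y).coef_
    deltas = []
    for i in range(n):
        mask = np.arange(n) != i             # drop the i-th training point
        loo = model_cls(**kwargs).fit(X[mask], y[mask]).coef_
        deltas.append(np.linalg.norm(loo - full))
    return max(deltas)

# alpha=0.1 is an arbitrary illustrative choice, not taken from the talk.
print("Lasso (l1):", max_leave_one_out_change(Lasso, alpha=0.1))
print("Ridge (l2):", max_leave_one_out_change(Ridge, alpha=0.1))
```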

Download slides: nipsworkshops2012_xu_theorem_01.pdf (209.2 KB)
