ℓ1,p-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods

author: Zirui Zhou, Chinese University of Hong Kong
published: Dec. 5, 2015; recorded: October 2015
Description

Recently, ℓ1,p-regularization has been widely used to induce structured sparsity in the solutions to various optimization problems. Motivated by the desire to analyze the convergence rate of first-order methods, we show that for a large class of ℓ1,p-regularized problems, an error bound condition is satisfied when p∈[1,2] or p=∞ but fails to hold for any p∈(2,∞). Based on this result, we show that many first-order methods enjoy an asymptotic linear rate of convergence when applied to ℓ1,p-regularized linear or logistic regression with p∈[1,2] or p=∞. By contrast, numerical experiments suggest that for the same class of problems with p∈(2,∞), the aforementioned methods may not converge linearly.
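To make the setting concrete, here is a minimal sketch, not from the lecture, of a proximal gradient (ISTA-type) method applied to ℓ1,2-regularized least squares (the group lasso), one of the cases p ∈ [1,2] for which the error bound holds. The function names, step-size choice, and synthetic data are illustrative assumptions.

```python
import numpy as np

def prox_group_l2(v, groups, thresh):
    # Block soft-thresholding: proximal operator of thresh * sum_g ||v_g||_2.
    out = v.copy()
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        out[g] = 0.0 if norm_g <= thresh else (1.0 - thresh / norm_g) * v[g]
    return out

def proximal_gradient(A, b, groups, lam, n_iter=500):
    # Minimize 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2 via proximal gradient steps.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth least-squares term
        x = prox_group_l2(x - step * grad, groups, step * lam)
    return x

# Tiny synthetic example with three groups of two coordinates each (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
b = rng.standard_normal(20)
groups = [slice(0, 2), slice(2, 4), slice(4, 6)]
print(proximal_gradient(A, b, groups, lam=0.5))
```

For p = 2 the proximal map has the closed form used above; the error bound condition discussed in the talk is what underpins the asymptotic linear convergence of such iterations for p ∈ [1,2] or p = ∞, whereas for p ∈ (2,∞) no comparable guarantee is established.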
