Regularization and Computations: Early Stopping for Online Learning

Author: Lorenzo Rosasco, Massachusetts Institute of Technology (MIT)
Published: Oct. 6, 2014; recorded: December 2013

Description

Early stopping is one of the most appealing heuristics when dealing with big data, since the computational resources required for learning are directly linked to the desired generalization properties. Interestingly, the theoretical foundations of learning with early stopping have only recently been developed, and only for the case of classical batch gradient descent.

In this talk, we discuss and analyze the potential impact of early stopping in online learning in a stochastic setting. More precisely, we study the estimator defined by incremental gradient descent on the (unregularized) empirical risk and show that it is universally consistent when provided with a universal step-size and a suitable early stopping rule. Our results shed light on the need to consider several passes over the data (epochs) in online learning.
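To make the setup concrete, the following is a minimal sketch, not the estimator or step-size analyzed in the talk, of incremental gradient descent on an unregularized least-squares empirical risk, run for several passes (epochs) with a hold-out early stopping rule. The function name, the constant step-size, and the patience-based stopping criterion are illustrative assumptions.

```python
# Illustrative sketch: incremental (one sample at a time) gradient descent on the
# unregularized least-squares empirical risk, with hold-out early stopping.
# Step-size and stopping rule are assumptions for illustration only.
import numpy as np

def incremental_gd_early_stopping(X, y, X_val, y_val,
                                  step_size=0.01, max_epochs=100, patience=5):
    """Fit a linear model w by cycling through the samples one at a time,
    stopping when the validation error has not improved for `patience` epochs."""
    n, d = X.shape
    w = np.zeros(d)
    best_w, best_err, stall = w.copy(), np.inf, 0

    for epoch in range(max_epochs):
        for i in np.random.permutation(n):           # one incremental pass (epoch)
            grad_i = (X[i] @ w - y[i]) * X[i]         # gradient of (1/2)(x_i.w - y_i)^2
            w -= step_size * grad_i
        val_err = np.mean((X_val @ w - y_val) ** 2)   # hold-out estimate of the risk
        if val_err < best_err:
            best_w, best_err, stall = w.copy(), val_err, 0
        else:
            stall += 1
            if stall >= patience:                     # early stopping acts as the regularizer
                break
    return best_w
```

In this sketch the number of epochs actually performed plays the role of the regularization parameter: stopping earlier yields a more constrained estimator, while running longer fits the empirical risk more closely, which mirrors the trade-off between computation and generalization discussed above.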
