Mistake bounds and risk bounds for on-line learning algorithms

author: Nicolò Cesa-Bianchi, University of Milan
published: Feb. 25, 2007, recorded: October 2005, views: 3120
Description

In statistical learning theory, risk bounds are typically obtained by manipulating suprema of empirical processes that measure the largest deviation of the empirical risk from the true risk over a class of models. In this talk we describe an alternative approach: deriving risk bounds for the ensemble of hypotheses obtained by running an arbitrary learning algorithm in an on-line fashion. This allows us to replace the uniform large-deviation argument with a simpler one based on the analysis of the empirical process generated by the on-line learner. The large deviations of such empirical processes are controlled by a single application of Bernstein's inequality for martingales, and the resulting risk bounds exhibit strong data-dependence.
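To give the flavor of the argument, here is a minimal sketch of the kind of bound this approach produces, in the spirit of online-to-batch conversions. The notation ($Z_t$, $h_t$, $\ell$, $M_n$) and the exact form of the bound are illustrative assumptions, not the talk's precise statement.

\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
% Illustrative sketch; the setting and constants are assumptions,
% not the exact statement from the lecture.
% Setting: $Z_1,\dots,Z_n$ are i.i.d.; the on-line learner produces
% hypotheses $h_0,\dots,h_{n-1}$, with $h_{t-1}$ used to predict $Z_t$;
% losses are assumed bounded in $[0,1]$.
Let $\ell(h,Z)\in[0,1]$ be the loss, let
$\operatorname{risk}(h)=\mathbb{E}\,\ell(h,Z)$, and let
$M_n=\sum_{t=1}^{n}\ell(h_{t-1},Z_t)$ be the learner's cumulative
on-line loss. Since $h_{t-1}$ depends only on $Z_1,\dots,Z_{t-1}$, the
differences $\operatorname{risk}(h_{t-1})-\ell(h_{t-1},Z_t)$ form a
martingale difference sequence, and a single application of Bernstein's
inequality for martingales gives, with probability at least $1-\delta$,
\begin{equation*}
  \frac{1}{n}\sum_{t=1}^{n}\operatorname{risk}(h_{t-1})
  \;\le\; \frac{M_n}{n}
  \;+\; O\!\left(\sqrt{\frac{M_n}{n}\cdot\frac{\ln(1/\delta)}{n}}
  \;+\;\frac{\ln(1/\delta)}{n}\right).
\end{equation*}
\end{document}

The data-dependence enters through Bernstein's variance term: the martingale's predictable variance is bounded by the sum of the true risks, which the proof relates back to the observed loss $M_n$. When $M_n$ is small, the deviation term shrinks at the fast rate $\ln(1/\delta)/n$ rather than the $\sqrt{\ln(1/\delta)/n}$ rate one would get from the Hoeffding-Azuma inequality.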

Download slides: mcslw04_bianchi_mbrbl_01.pdf (79.6 KB)

