AdaBoost is Universally Consistent

author: Peter L. Bartlett, UC Berkeley
published: Feb. 25, 2007, recorded: August 2006
Description

We consider the risk, or probability of error, of the classifier produced by AdaBoost, and in particular the stopping strategy required to ensure universal consistency. (A classification method is universally consistent if the risk of the classifiers it produces approaches the Bayes risk, the minimal risk, as the sample size grows.) Several related algorithms, namely regularized versions of AdaBoost, have been shown to be universally consistent, but AdaBoost's own universal consistency has not been established. Jiang has demonstrated that, for each probability distribution satisfying certain smoothness conditions, there is a stopping time t_n for sample size n such that, if AdaBoost is stopped after t_n iterations, its risk approaches the Bayes risk for that distribution. Our main result is that if AdaBoost is stopped after n^{1-ε} iterations, it is universally consistent, where n is the sample size and ε ∈ (0, 1).
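The stopping rule in the main result is simple to state: run AdaBoost for t_n = ⌊n^{1-ε}⌋ rounds and return the resulting voted classifier. The sketch below only illustrates that rule and is not code from the lecture; it implements classical AdaBoost with decision stumps in plain NumPy, and the function names (adaboost, fit_stump, predict_stump), the default choice ε = 0.5, and the stump base class are assumptions made for the example.

# Illustrative sketch (not from the lecture): classical AdaBoost with decision
# stumps, stopped after t_n = floor(n^(1 - eps)) rounds as in the stated result.
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump: threshold one feature, predict +/-1."""
    n, d = X.shape
    best = (np.inf, 0, 0.0, 1)          # (weighted error, feature, threshold, polarity)
    for j in range(d):
        for t in np.unique(X[:, j]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - t) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, t, polarity)
    return best

def predict_stump(stump, X):
    _, j, t, polarity = stump
    return np.where(polarity * (X[:, j] - t) >= 0, 1, -1)

def adaboost(X, y, eps=0.5):
    """Run AdaBoost for t_n = floor(n^(1 - eps)) rounds; labels y in {-1, +1}."""
    n = len(y)
    t_n = max(1, int(np.floor(n ** (1.0 - eps))))   # stopping time from the result
    w = np.full(n, 1.0 / n)                         # uniform initial weights
    ensemble = []
    for _ in range(t_n):
        stump = fit_stump(X, y, w)
        err = max(stump[0], 1e-12)
        if err >= 0.5:                              # no better than chance: stop early
            break
        alpha = 0.5 * np.log((1.0 - err) / err)     # AdaBoost step size
        pred = predict_stump(stump, X)
        w *= np.exp(-alpha * y * pred)              # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    score = sum(alpha * predict_stump(s, X) for alpha, s in ensemble)
    return np.sign(score)

Calling adaboost(X, y) on n training points then uses roughly sqrt(n) boosting rounds under the illustrative choice ε = 0.5, matching the form of the stopping time in the theorem.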

Download slides: mlss06tw_bartlett_auc.pdf (75.9 KB)