Survey of Boosting from an Optimization Perspective

author: Manfred K. Warmuth, Department of Computer Science, University of California Santa Cruz
author: S.V.N. Vishwanathan, Department of Computer Science, University of California Santa Cruz
published: Aug. 26, 2009,   recorded: June 2009

Download slides: icml09_warmuth_vishwanathan_sbop.pdf (6.6 MB)


Watch videos:

Part 1 (1:27:44)
Part 2 (50:41)

Description

Boosting has become a well-known ensemble method. The algorithm maintains a distribution on the ±1-labeled examples, and a new base learner is added in a greedy fashion. The goal is to obtain a small linear combination of base learners that clearly separates the examples. We focus on a recent view of boosting in which the update of the distribution on the examples is characterized by a minimization problem that uses a relative entropy as a regularization. The best-known boosting algorithm is AdaBoost; when the data is separable, it approximately maximizes the hard margin. We focus on recent algorithms that provably maximize the soft margin when the data is noisy. We will teach the new algorithms, give a unified and versatile view of boosting in terms of relative entropy regularization, and show how to solve large-scale problems based on state-of-the-art optimization techniques. Our goal is to motivate people to mimic the recent successes of the SVM community in scaling up the solvable problem size. This goal is challenging because in boosting the regularization (relative entropy) is more complicated than the one used for SVMs (squared Euclidean distance). Nevertheless, we can solve dense problems with 200K examples in less than a minute on a laptop.
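The multiplicative update of the distribution on the examples is the heart of this entropic view. As a concrete illustration, here is a minimal AdaBoost sketch in Python using one-dimensional threshold stumps as base learners. This is not the tutorial's implementation; all names (stump_predict, best_stump, adaboost) and the choice of stumps are our own illustrative assumptions. The update d_i <- d_i * exp(-alpha * y_i * h(x_i)) in the loop is the exponential update that arises when the new distribution is chosen by minimizing a relative entropy to the old one, subject to a constraint involving the last base learner.

```python
# Minimal AdaBoost sketch (illustrative only, not the tutorial's code).
# Base learners are 1-D threshold stumps; labels are +1 / -1.
import numpy as np

def stump_predict(x, threshold, sign):
    """Base learner: predict `sign` where x > threshold, -sign otherwise."""
    return np.where(x > threshold, sign, -sign)

def best_stump(x, y, d):
    """Greedy step: pick the stump with smallest weighted error under d."""
    best = (np.inf, None, None)
    for threshold in np.unique(x):
        for sign in (+1, -1):
            err = np.sum(d * (stump_predict(x, threshold, sign) != y))
            if err < best[0]:
                best = (err, threshold, sign)
    return best

def adaboost(x, y, rounds=20):
    n = len(y)
    d = np.full(n, 1.0 / n)        # uniform initial distribution on examples
    ensemble = []                  # list of (alpha, threshold, sign)
    for _ in range(rounds):
        err, threshold, sign = best_stump(x, y, d)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # coefficient of new base learner
        h = stump_predict(x, threshold, sign)
        d = d * np.exp(-alpha * y * h)          # exponential (entropic) update
        d /= d.sum()                            # renormalize to a distribution
        ensemble.append((alpha, threshold, sign))
    return ensemble

def predict(ensemble, x):
    """Sign of the linear combination of base learners."""
    score = sum(a * stump_predict(x, t, s) for a, t, s in ensemble)
    return np.sign(score)

# Toy usage: a noisy 1-D problem with +/-1 labels.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.where(x + 0.1 * rng.normal(size=200) > 0, 1, -1)
model = adaboost(x, y)
print("training accuracy:", np.mean(predict(model, x) == y))
```

The sketch above is only the hard-margin AdaBoost baseline. The soft-margin algorithms covered in the lectures modify the distribution update (for example by capping the weight any single example can receive), but the relative-entropy structure of the minimization stays the same.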

Reviews and comments:

philip torr, September 24, 2010 at 11:31 p.m.:

excellent

