MultiBoost

author: Balázs Kégl, Laboratoire de l'Accélérateur Linéaire, University of Paris-Sud 11
published: July 20, 2010,   recorded: June 2010,   views: 4519

Description

AdaBoost [Freund-Schapire, 1997] is one of the best off-the-shelf supervised classification methods developed in the last fifteen years. Despite (or perhaps due to?) its simplicity and versatility, it is surprisingly under-represented among open-source software packages. The goal of this submission is to fill this gap.

Our implementation is based on the AdaBoost.MH algorithm [Schapire-Singer, 1999]. It is an intrinsically multi-class classification method (unlike SVMs, for example), and it was easy to extend to multi-label or multi-task classification (where one item can belong to several classes). The program package can be divided into four modules that can be changed more or less independently depending on the application.
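
For readers who want the formulas: AdaBoost.MH [Schapire-Singer, 1999] maintains one weight per (example, label) pair and combines vector-valued base classifiers. A sketch in standard notation (not the package's internal naming):

    % Strong classifier after T rounds, evaluated on label \ell:
    f(x,\ell) = \sum_{t=1}^{T} \alpha_t \, h_t(x,\ell),
    \qquad \hat{\ell}(x) = \arg\max_{\ell} f(x,\ell)

    % Weight update over (example, label) pairs, with y_{i\ell} \in \{-1,+1\}
    % and Z_t chosen so that the weights sum to one:
    w^{(t+1)}_{i\ell} = \frac{w^{(t)}_{i\ell} \,
        \exp\bigl(-\alpha_t \, y_{i\ell} \, h_t(x_i,\ell)\bigr)}{Z_t}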

The strong learner. It tells you how to boost. The main boosting engine is AdaBoost.MH, but we have also implemented FilterBoost for a research project. Other possible strong learners could be LogitBoost and ADTrees.
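
To make the division of labor concrete, here is a minimal sketch (not MultiBoost's actual code) of what one round of a discrete, binary AdaBoost strong learner does with the output of a base learner; the function name and signature are illustrative only.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // One round of discrete AdaBoost (binary labels y in {-1,+1}), for illustration only.
    // Given the weighted error eps of the chosen base classifier and its predictions
    // h_pred on the training set, compute the coefficient alpha and re-weight the examples.
    double boosting_round(std::vector<double>& w,          // per-example weights (sum to 1)
                          const std::vector<int>& y,       // true labels
                          const std::vector<int>& h_pred,  // base classifier predictions
                          double eps) {                    // weighted training error of h
        const double alpha = 0.5 * std::log((1.0 - eps) / eps);
        double z = 0.0;
        for (std::size_t i = 0; i < w.size(); ++i) {
            w[i] *= std::exp(-alpha * y[i] * h_pred[i]);
            z += w[i];
        }
        for (double& wi : w) wi /= z;  // renormalize so the weights sum to one again
        return alpha;  // weight of this base classifier in the final strong classifier
    }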

The base (or weak) learner. It tells you what features to boost. Right now we have two basic (feature-wise) base learners: decision stumps for real-valued features and indicators for nominal features. We have two meta base learners: trees and products. They can use any base learner and construct a generic complex base learner, either using a "classic" tree structure (decision trees) or using the product of simple base learners (self-advertisement: boosting products of stumps is the best reported no-domain-knowledge algorithm on MNIST after Hinton and Salakhutdinov's deep belief nets). We have also implemented Haar filters [Viola-Jones, 2004] for image classification: a meta base learner that uses stumps over a high-dimensional feature space computed "on the fly". It is a nice example of a domain-dependent base learner that works hand-in-hand with its appropriate data structure.
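
As an illustration of the simplest case, a decision stump thresholds a single real-valued feature. The sketch below is a deliberately unoptimized weighted stump trainer; the names are chosen for the example rather than taken from the package.

    #include <cstddef>
    #include <initializer_list>
    #include <vector>

    // A decision stump on one real-valued feature: predicts +1 if the feature value
    // is at or above the threshold, -1 below it (times 'polarity' for the opposite orientation).
    struct Stump {
        std::size_t feature = 0;
        double threshold = 0.0;
        int polarity = 1;  // +1 or -1

        int predict(const std::vector<double>& x) const {
            return (x[feature] >= threshold ? 1 : -1) * polarity;
        }
    };

    // Exhaustive search for the stump with the smallest weighted error on binary
    // labels y in {-1,+1}.  Written O(d * n^2) for clarity; a real implementation
    // would sort each feature once and sweep the candidate thresholds.
    Stump train_stump(const std::vector<std::vector<double>>& X,
                      const std::vector<int>& y,
                      const std::vector<double>& w) {
        Stump best;
        double best_err = 2.0;  // weighted error is always <= 1
        const std::size_t n = X.size();
        const std::size_t d = X.empty() ? 0 : X[0].size();
        for (std::size_t j = 0; j < d; ++j)
            for (std::size_t i = 0; i < n; ++i)
                for (int pol : {1, -1}) {
                    Stump s{j, X[i][j], pol};
                    double err = 0.0;
                    for (std::size_t k = 0; k < n; ++k)
                        if (s.predict(X[k]) != y[k]) err += w[k];
                    if (err < best_err) { best_err = err; best = s; }
                }
        return best;
    }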

The data representation. The basic data structure is a matrix of observations with a vector of labels. We also support multi-label classification, where the label data is also a full matrix. In addition, we have sparse data representations for both the observation matrix and the label matrix. In general, base learners are implemented to work with their own data representation (for example, sparse stumps work on sparse observation matrices, and Haar filters work on an integral-image data representation).
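
A minimal sketch of what a sparse observation matrix could look like (compressed-row storage; the type and field names are illustrative, not MultiBoost's actual data structures):

    #include <cstddef>
    #include <vector>

    // Compressed-row sparse matrix: only the non-zero (column, value) pairs of each
    // observation are stored, which is what a sparse stump learner would iterate over.
    struct SparseMatrix {
        std::size_t n_rows = 0, n_cols = 0;
        std::vector<std::size_t> row_start;  // where row i begins in cols/values (size n_rows + 1)
        std::vector<std::size_t> cols;       // column index of each stored entry
        std::vector<double> values;          // value of each stored entry

        double at(std::size_t i, std::size_t j) const {
            for (std::size_t k = row_start[i]; k < row_start[i + 1]; ++k)
                if (cols[k] == j) return values[k];
            return 0.0;  // entries that are not stored are implicitly zero
        }
    };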

Data parser. We can read data in the ARFF and SVMLight formats.
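
For reference, toy samples of the two input formats (these files are not part of the package):

    % ARFF: the header declares the attributes, @DATA lists one observation per line
    @RELATION toy
    @ATTRIBUTE x1 NUMERIC
    @ATTRIBUTE x2 NUMERIC
    @ATTRIBUTE class {pos,neg}
    @DATA
    0.3,1.7,pos
    2.1,0.4,neg

    # SVMLight: <label> <feature>:<value> ..., only non-zero features are listed
    +1 1:0.3 2:1.7
    -1 1:2.1 2:0.4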

The base learner/data structure combinations cover a large spectrum of possible applications, but the main advantage of the package is that it is easy (for the advanced user) to adapt MultiBoost to a specific (non-standard) application by implementing the base learner and data structure interfaces that work together.
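
The kind of pairing this implies might look roughly as follows; the two interfaces below are a hypothetical sketch, not the actual MultiBoost headers.

    #include <cstddef>

    // Hypothetical data-access interface: a base learner only sees the data through it,
    // so a dense matrix, a sparse matrix, or an integral image can all sit behind it.
    class InputData {
    public:
        virtual ~InputData() = default;
        virtual std::size_t numExamples() const = 0;
        virtual double weight(std::size_t i) const = 0;               // current boosting weight
        virtual int label(std::size_t i, std::size_t cls) const = 0;  // +1/-1 label matrix entry
    };

    // Hypothetical base-learner interface: learn from the (weighted) data, then report
    // a confidence for an (example, class) pair so the strong learner can combine it.
    class BaseLearner {
    public:
        virtual ~BaseLearner() = default;
        virtual void learn(const InputData& data) = 0;
        virtual double classify(const InputData& data, std::size_t i, std::size_t cls) const = 0;
    };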

The source code is available from the website multiboost.org. It can be compiled on Mac OS X, Linux, and Microsoft Windows. The interface is command-line execution with switches.

See also:

Download slides: icml2010_kegl_mubu_01.pdf (116.4 KB)
