Robust Bounds for Classification via Selective Sampling

author: Francesco Orabona, Toyota Technological Institute at Chicago
published: Aug. 26, 2009,   recorded: June 2009,   views: 3867

Description

We introduce a new algorithm for binary classification in the selective sampling protocol. Our algorithm uses Regularized Least Squares (RLS) as its base classifier, and for this reason it can be run efficiently in any RKHS. Unlike previous margin-based semi-supervised algorithms, our sampling condition hinges on a simultaneous upper bound on the bias and variance of the RLS estimate under a simple linear label noise model. This fact allows us to prove performance bounds that hold for an arbitrary sequence of instances. In particular, we show that our sampling strategy approximates the margin of the Bayes optimal classifier to any desired accuracy $\varepsilon$ by asking $\widetilde{\mathcal{O}}(d/\varepsilon^2)$ queries (in the RKHS case $d$ is replaced by a suitable spectral quantity). While these are the standard rates in the fully supervised i.i.d. case, the best previously known result in our harder setting was $\widetilde{\mathcal{O}}(d^3/\varepsilon^4)$. Preliminary experiments show that some of our algorithms also exhibit good practical performance.
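For a concrete picture of the protocol, here is a minimal Python sketch of selective sampling with an RLS base classifier. It is an illustrative stand-in, not the paper's exact algorithm: the query rule below (squared margin compared against the instance's uncertainty under the current regularized correlation matrix) is a simplified proxy for the simultaneous bias/variance condition described above, and all names (`selective_sampling_rls`, `kappa`, `get_label`) are hypothetical.

```python
import numpy as np


def selective_sampling_rls(stream, d, reg=1.0, kappa=1.0):
    """Illustrative sketch of selective sampling with an RLS base
    classifier (a simplified proxy, not the paper's exact query rule).

    stream -- iterable of (x, get_label) pairs; x is a d-dimensional
              instance and get_label() returns its {-1, +1} label.
              Labels are requested only when the algorithm queries.
    """
    A = reg * np.eye(d)   # regularized correlation matrix of queried instances
    b = np.zeros(d)       # sum of label-weighted queried instances
    predictions, queries = [], 0

    for t, (x, get_label) in enumerate(stream, start=1):
        A_inv = np.linalg.inv(A)   # O(d^3); rank-one updates would be cheaper
        margin = b @ A_inv @ x     # RLS margin estimate for x
        r = x @ A_inv @ x          # uncertainty of x w.r.t. queried data
        predictions.append(float(np.sign(margin)) or 1.0)

        # Query when the squared margin is small relative to the
        # uncertainty: a hypothetical stand-in for the simultaneous
        # bias/variance bound used in the paper.
        if margin ** 2 <= kappa * r * np.log(t + 1):
            y = get_label()
            A += np.outer(x, x)    # rank-one update of the correlation matrix
            b += y * x
            queries += 1

    return predictions, queries


if __name__ == "__main__":
    # Toy demo on a noiseless linear stream (hypothetical setup).
    rng = np.random.default_rng(0)
    d = 10
    w_star = rng.normal(size=d)

    def stream(n=1000):
        for _ in range(n):
            x = rng.normal(size=d)
            x /= np.linalg.norm(x)
            yield x, (lambda x=x: float(np.sign(w_star @ x)) or 1.0)

    preds, q = selective_sampling_rls(stream(), d, kappa=0.1)
    print(f"queried {q} labels out of {len(preds)} rounds")
```

In the RKHS case, the same recursion can be kernelized by maintaining the inverse Gram matrix of the queried instances in place of the d-by-d matrix above, which is where the spectral quantity mentioned in the bound enters.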

Download slides: icml09_orabona_rbcs_01.pdf (510.6 KB)

