Unifying Subspace and Distance Metric Learning with Bhattacharyya Coefficient for Image Classification
published: Dec. 5, 2008, recorded: November 2008, views: 791
In this talk, we propose a unified scheme for subspace and distance metric learning under the Bayesian framework for image classification. Based on the local distribution of the data, we divide the k-nearest neighbors of each sample into an intra-class set and an inter-class set, and we aim to learn a distance metric in the embedding subspace that makes the distances between a sample and its intra-class set smaller than the distances between the sample and its inter-class set. To reach this goal, we regard the intra-class distances and the inter-class distances as drawn from two different probability distributions, and we formulate the goal as minimizing the overlap between these two distributions.
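The neighbor-splitting step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses plain Euclidean distance to find the k nearest neighbors and a generic Mahalanobis-style distance for a learned metric M; all function and variable names are our own.

```python
import numpy as np

def intra_inter_sets(X, y, i, k):
    """Split the k-nearest neighbors of sample i into an intra-class set
    (same label) and an inter-class set (different label).
    Euclidean distance is used here purely for illustration."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                       # exclude the sample itself
    nn = np.argsort(d)[:k]              # indices of the k nearest neighbors
    intra = nn[y[nn] == y[i]]           # neighbors sharing the sample's label
    inter = nn[y[nn] != y[i]]           # neighbors with a different label
    return intra, inter

def metric_dist_sq(x, z, M):
    """Squared distance under a learned (positive semidefinite) metric M.
    With M = I this reduces to the squared Euclidean distance."""
    diff = x - z
    return float(diff @ M @ diff)
```

The learning objective would then push `metric_dist_sq` values to the intra-class set below those to the inter-class set for every sample.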
Inspired by Bayesian classification error estimation, we formulate the objective function as minimizing the Bhattacharyya coefficient between the two distributions. We further extend the method with the kernel trick to learn a nonlinear distance metric. The power and generality of the proposed approach are demonstrated by a series of experiments on the CMU-PIE face database, the extended YALE face database, and the COREL-5000 nature image database.
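To make the objective concrete: if the intra-class and inter-class distances are each modeled with a univariate Gaussian (an assumption of this sketch, not a claim about the paper's exact parameterization), the Bhattacharyya coefficient BC = exp(-D_B) between the two densities has a closed form, and minimizing it shrinks their overlap.

```python
import math

def bhattacharyya_coeff_gauss(mu1, var1, mu2, var2):
    """Closed-form Bhattacharyya coefficient between two univariate
    Gaussians N(mu1, var1) and N(mu2, var2):
        BC = sqrt(2*sigma1*sigma2 / (var1 + var2))
             * exp(-(mu1 - mu2)^2 / (4 * (var1 + var2)))
    BC = 1 when the distributions coincide; it approaches 0 as the
    overlap between them vanishes."""
    s = var1 + var2
    return math.sqrt(2.0 * math.sqrt(var1 * var2) / s) * \
           math.exp(-((mu1 - mu2) ** 2) / (4.0 * s))
```

A metric that pulls intra-class distances toward small values and pushes inter-class distances toward large ones separates the two Gaussians' means, which drives this coefficient toward zero.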
Download slides: etvc08_metaxas_usadm_01.ppt (3.7 MB)
Download slides: etvc08_metaxas_usadm_02.ppt (610.0 KB)