Kullback-Leibler Divergence Estimation of Continuous Distributions

author: Fernando Perez-Cruz, Princeton University
published: Feb. 25, 2008,   recorded: December 2007,   views: 9802

Description

We present a universal method for estimating the KL divergence between continuous densities and prove that it converges almost surely. Divergence estimation is typically solved by estimating the densities first. Our main result shows that this intermediate step is unnecessary and that the divergence can be estimated either from the empirical CDF or from k-nearest-neighbour density estimates, which do not converge to the true measure for finite k. The convergence proof is based on describing the statistics of our estimator using waiting-time distributions, such as the exponential or Erlang. We illustrate the proposed estimators, show how they compare to existing methods based on density estimation, and outline how our divergence estimators can be used to solve the two-sample problem.
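For readers who want to experiment, below is a minimal sketch of a generic k-nearest-neighbour KL divergence estimator of the kind discussed in the abstract; the estimator analysed in the paper may differ in its exact form. The function name knn_kl_divergence and the use of SciPy's cKDTree are illustrative choices, not part of the original material; x holds samples from P and y holds samples from Q.

```python
import numpy as np
from scipy.spatial import cKDTree


def knn_kl_divergence(x, y, k=1):
    """Estimate D(P||Q) in nats from samples x ~ P and y ~ Q via k-NN distances.

    Illustrative sketch only; assumes x contains no duplicate points so that
    all nearest-neighbour distances are strictly positive.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    if y.ndim == 1:
        y = y[:, None]
    n, d = x.shape
    m, _ = y.shape

    # rho_i: distance from x_i to its k-th nearest neighbour among the other x's
    # (query k+1 neighbours because the closest point is x_i itself).
    rho = cKDTree(x).query(x, k=k + 1)[0][:, -1]

    # nu_i: distance from x_i to its k-th nearest neighbour among the y's.
    nu = cKDTree(y).query(x, k=k)[0]
    if k > 1:
        nu = nu[:, -1]

    # Bias-corrected k-NN divergence estimate.
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1.0))
```

As a quick sanity check, drawing both x and y from the same distribution should give an estimate close to zero, while drawing them from well-separated distributions should give a clearly positive value; this is also the intuition behind using the estimator as a two-sample test statistic, as mentioned in the abstract.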

Download slides: ripd07_cruz_kld_01.pdf (260.9 KB)


Reviews and comments:

Comment 1: Roman Belavkin, August 13, 2008 at 11:07 a.m.:

Slide 2: Generally, when x is continuous, the KL divergence can be infinite, even if P(dx) is absolutely continuous w.r.t. Q(dx).
