Recent Advances in Bayesian Methods

author: Jun Zhu, Department of Computer Science and Technology, Tsinghua University
published: March 27, 2014; recorded: November 2013; views: 5195

Lecture parts:

Part 1 (1:09:22)
Part 2 (48:48)
Part 3 (36:55)

Description

This year marks the 250th anniversary of Bayes' theorem, which is playing an increasingly important role in statistical applications. Existing Bayesian models, especially nonparametric Bayesian methods, rely heavily on specially conceived priors to incorporate domain knowledge for discovering improved latent representations. While priors can affect posterior distributions through Bayes' theorem, recent work has shown that imposing posterior regularization is arguably more direct and in some cases more natural and easier to carry out. This tutorial consists of two parts.

In the first part, I will review recent developments in parametric and nonparametric Bayesian methods, with examples of Gaussian processes for regression, Dirichlet processes for clustering, and Indian buffet processes for latent feature learning. In the second part, I will introduce the connection between Bayes' theorem and the principle of relative entropy minimization. In particular, I will present regularized Bayesian inference (RegBayes), a computational framework for performing posterior inference with regularization on the desired post-data posterior distributions. When the regularization is convex and induced from a linear operator on the posterior distributions, RegBayes can be solved using tools from convex analysis. I will then present concrete examples, including MedLDA for learning discriminative topic representations and infinite latent support vector machines for learning discriminative latent features for classification, as well as applications to social network analysis, matrix factorization, and multi-task learning. All of these models combine the large-margin idea with a (nonparametric) Bayesian model to discover predictive latent representations. I will discuss both variational and Monte Carlo methods for inference.
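To make the first part concrete, below is a minimal Gaussian-process regression sketch in plain NumPy. The squared-exponential kernel and every hyperparameter value here are illustrative assumptions for the sketch, not choices taken from the lecture.

```python
import numpy as np

def sq_exp_kernel(X1, X2, length_scale=1.0, signal_var=1.0):
    """Squared-exponential (RBF) kernel matrix between two sets of 1-D inputs."""
    diff = X1[:, None] - X2[None, :]
    return signal_var * np.exp(-0.5 * (diff / length_scale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise_var=0.1):
    """Posterior mean and pointwise variance of a zero-mean GP at the test inputs."""
    K = sq_exp_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = sq_exp_kernel(X_train, X_test)
    K_ss = sq_exp_kernel(X_test, X_test)
    # Use a Cholesky factorization instead of a direct inverse for stability.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^{-1} y
    v = np.linalg.solve(L, K_s)                                # L^{-1} K_s
    mean = K_s.T @ alpha    # posterior mean: K_s^T K^{-1} y
    cov = K_ss - v.T @ v    # posterior covariance: K_ss - K_s^T K^{-1} K_s
    return mean, np.diag(cov)

# Toy usage: noisy observations of a sine function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
X_star = np.linspace(-3, 3, 5)
mu, var = gp_posterior(X, y, X_star)
print(np.round(mu, 2), np.round(var, 3))
```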
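The starting point of the second part can be stated in one formula. With model $M$, data $D$, prior $\pi(M)$, and $\mathcal{P}$ the space of probability distributions over $M$, Bayes' theorem is recovered as the solution of a relative-entropy minimization problem (a classical observation often credited to Zellner, 1988):

$$
p(M \mid D) \;=\; \operatorname*{arg\,min}_{q(M) \in \mathcal{P}} \; \mathrm{KL}\big(q(M) \,\|\, \pi(M)\big) \;-\; \mathbb{E}_{q(M)}\big[\log p(D \mid M)\big].
$$

Viewing Bayes' rule as the optimum of an objective is what makes it natural to add further terms to that objective.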
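RegBayes then augments this variational objective with a regularizer on the post-data posterior. Schematically (the symbol $\Omega$ and the hinge-loss instantiation below are notational choices for this sketch, not formulas quoted from the lecture):

$$
\min_{q(M) \in \mathcal{P}} \; \mathrm{KL}\big(q(M) \,\|\, \pi(M)\big) \;-\; \mathbb{E}_{q(M)}\big[\log p(D \mid M)\big] \;+\; \Omega\big(q(M)\big).
$$

When $\Omega$ is convex and depends on $q$ only through linear functionals such as expectations, the problem can be analyzed with convex duality, which is the sense in which convex analysis applies. Large-margin instances such as MedLDA take $\Omega$ to be an expected hinge loss, e.g. $\Omega(q) = C \sum_d \max\big(0,\; \ell - y_d\, \mathbb{E}_{q}[\eta^\top \bar{z}_d]\big)$ for binary document labels $y_d$, classifier weights $\eta$, and mean topic assignments $\bar{z}_d$.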

Download slides: acml2013_zhu_bayesian_methods.pdf (8.7 MB)

