Dirichlet Processes and Nonparametric Bayesian Modelling
Description
Bayesian modeling is a principled approach to updating the degree of belief in a hypothesis given prior knowledge and available evidence. Prior knowledge and evidence are combined using Bayes' rule to obtain the posterior distribution over hypotheses. In most cases of interest to machine learning, the prior knowledge is formulated as a prior distribution over model parameters and the evidence corresponds to the observed data; applying Bayes' rule then allows us to perform inference about new data. As more data are observed, the posterior parameter distribution becomes increasingly concentrated and the influence of the prior distribution diminishes. Under some assumptions (in particular, that the likelihood model is correct and that the true parameters have positive prior probability), the posterior distribution converges to a point mass located at the true parameters. The challenges in Bayesian modeling are, first, to find suitable application-specific statistical models and, second, to (approximately) solve the resulting inference equations.
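To make the update concrete: with parameters θ and observed data D, Bayes' rule reads p(θ | D) ∝ p(D | θ) p(θ). The following minimal Python sketch, which is not from the lecture itself, illustrates the posterior concentration described above; the Beta-Bernoulli coin-flipping model, the true bias of 0.7, and the sample sizes are all illustrative assumptions. It uses the conjugate pair in which a Beta(α, β) prior combined with a Bernoulli likelihood yields a Beta(α + heads, β + tails) posterior in closed form.

```python
# Illustrative sketch (not from the lecture): conjugate Beta-Bernoulli
# updating, showing how the posterior concentrates as data accumulate.
import random

random.seed(0)

true_theta = 0.7          # unknown coin bias (ground truth for the simulation)
alpha, beta = 1.0, 1.0    # Beta(1, 1) prior, i.e. uniform over [0, 1]

for n in [10, 100, 1000, 10000]:
    data = [1 if random.random() < true_theta else 0 for _ in range(n)]
    heads = sum(data)
    # Conjugacy: Beta prior + Bernoulli likelihood -> Beta posterior
    a_post, b_post = alpha + heads, beta + (n - heads)
    mean = a_post / (a_post + b_post)
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    print(f"n={n:6d}  posterior mean={mean:.4f}  posterior std={var ** 0.5:.4f}")
```

Running the sketch, the posterior mean approaches the true bias and the posterior standard deviation shrinks roughly as 1/√n, matching the concentration argument in the description.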
Reviews and comments:
This is a great first introduction to Dirichlet distributions and processes which takes care to cover all the necessary background concepts and notation. A lot of the content will probably be very familiar to people who have any background in ML but the lectures are sufficiently well structured to make it easy to skip these bits and I have to commend the lecturer on his thoroughness.
Really nice talk; a beautiful and clear explanation. Makes for great viewing in conjunction with MacKay's and Ghahramani's tutorials on GPs and nonparametric Bayesian modeling.
I totally agree with Tom. I have viewed the video lectures of Teh, Jordan and Ghahramani, but still cannot understand many practical issues and the motivations of the algorithms. Tresp's lecture provides excellent explanations for my questions. This is absolutely a must-see lecture for beginners.
The flash player for the site is *terrible*. It doesn't seem to have a clue how to properly buffer a stream. It makes for a very frustrating viewing experience.
The second video in this series is not correctly synchronized. The video frequently skips in time while the audio continues normally; it flashes back and forth between two different slides while the lecturer stays in front of the screen, and the audio discusses a slide that is not shown at all. This makes the video VERY difficult to follow. Please correct this if possible.
Agree with Chris and Brian about the video quality, but it was a good lecture for beginners like me. However, I would have appreciated more elaboration of the formal definitions, which went by very quickly with no explanation of measurable spaces. And if anybody knows of a good beginners' tutorial on variational methods, including the mean-field approximation, please let me know. It would really help if the tutorial worked through some simple examples in detail.
In the second part, the video doesn't match the spoken audio or the displayed slides. This is very frustrating...
The lecture is indeed very good!