
Large Scale Graphical Models

Approximate Learning of Large Scale Graphical Models: Theory and Applications

Undirected graphical models provide a powerful framework for representing dependency structure between random variables. Learning the parameters of undirected models plays a crucial role in solving key problems in many machine learning applications, including natural language processing, visual object recognition, speech perception, information retrieval, computational biology, and many others. Learning in undirected graphical models of large treewidth is difficult because of the hard inference problem induced by the partition function for maximum likelihood learning, or by finding the MAP assignment for margin-based loss functions. Over the last decade, there has been considerable progress in developing algorithms for approximating the partition function and the MAP assignment, both via variational approaches (e.g., belief propagation) and sampling algorithms (e.g., MCMC). More recently, researchers have begun to apply these methods to learning large, densely connected undirected graphical models that may contain millions of parameters. A notable example is the learning of Deep Belief Networks and Deep Boltzmann Machines, which employ MCMC-based strategies to greedily learn deep hierarchical models.
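To see why the partition function makes learning hard, consider a minimal sketch (the model and parameter values below are illustrative assumptions, not from the workshop): computing the log-partition function of a fully connected binary pairwise MRF exactly requires summing over all 2^n configurations, which is feasible only for tiny n and is precisely what variational and MCMC methods approximate.

```python
import itertools
import math

def log_partition(n, weight, bias):
    """Exact log-partition function of a fully connected binary
    pairwise MRF with a shared coupling `weight` and unary `bias`.
    Brute-force enumeration costs O(2^n), illustrating why
    large-treewidth models need approximate inference."""
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        # Pairwise term: sum over all pairs i < j of x_i * x_j
        pair = sum(x[i] * x[j] for i in range(n) for j in range(i + 1, n))
        # Unnormalized probability of this configuration
        total += math.exp(weight * pair + bias * sum(x))
    return math.log(total)

# Feasible at n = 10 (1,024 terms); hopeless at n = 100 (2^100 terms).
print(log_partition(10, 0.1, -0.5))
```

With a million parameters, as in the deep models mentioned above, exact enumeration is out of the question, so learning relies on stochastic estimates of the partition function's gradient.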


The workshop homepage can be found at http://www.cs.toronto.edu/~rsalakhu/workshop_nips2009/index.html.

