Scalable Tensor Decompositions for Learning Hidden Variable Models

author: Sham M. Kakade, Microsoft Research New England
published: May 28, 2013, recorded: September 2012, views: 3544

Description

In many applications, we face the challenge of modeling the interactions between multiple observations. A popular and successful approach in machine learning and AI is to hypothesize the existence of certain latent (or hidden) causes which help to explain the correlations in the observed data. The (unsupervised) learning problem is to accurately estimate a model given only samples of the observed data. For example, in document modeling, we may wish to characterize the correlational structure of the "bag of words" in documents. Here, a standard model posits that each document is about a few topics (the hidden variables) and that each active topic determines the occurrence of words in the document. The learning problem is, using only the observed words in the documents (and not the hidden topics), to estimate the topic probability vectors (i.e., to discover how strongly words tend to appear under different topics). In practice, a broad class of latent variable models is most often fit with either local search heuristics (such as the EM algorithm) or sampling-based approaches.
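
To make the setup concrete, here is a minimal simulation of the single-topic bag-of-words model sketched above. All names and parameter values (K, V, topic_probs, word_dists) are illustrative assumptions rather than notation from the talk; the point is that only the sampled words, never the topics, are available to the learner.

    import numpy as np

    rng = np.random.default_rng(0)

    K, V = 3, 10                                    # number of topics, vocabulary size (illustrative)
    topic_probs = rng.dirichlet(np.ones(K))         # P(topic)
    word_dists = rng.dirichlet(np.ones(V), size=K)  # rows are topic probability vectors P(word | topic)

    def sample_document(n_words):
        """Draw one document: pick a hidden topic, then draw words i.i.d. from it."""
        topic = rng.choice(K, p=topic_probs)        # the hidden variable, never observed
        return rng.choice(V, size=n_words, p=word_dists[topic])

    docs = [sample_document(50) for _ in range(1000)]
    # The (unsupervised) learning problem: recover word_dists and topic_probs
    # from docs alone.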

This talk will discuss how generalizations of standard linear algebra tools (e.g., spectral methods) to tensors provide provably correct and efficient estimation methods for various latent variable models (under appropriate assumptions), including Gaussian mixture models, hidden Markov models, topic models such as latent Dirichlet allocation, latent parse tree models (PCFGs and dependency parsers), and models for communities in social networks. The talk will also briefly discuss how matrix and tensor decomposition methods can be used for the structure learning problem of determining both the existence of certain hidden causes and the underlying graphical structure between these hidden causes and the observed variables.
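
At the core of such spectral estimation methods is the tensor power iteration: after a suitable whitening step, low-order moments of the observed data form a symmetric tensor with an orthogonal decomposition whose components recover the model parameters. The following sketch runs the basic iteration on a synthetic orthogonally decomposable tensor rather than on data moments; the setup and variable names are assumptions for illustration, not the talk's notation.

    import numpy as np

    rng = np.random.default_rng(1)

    d = 5
    components, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthonormal components v_1, ..., v_d
    w = rng.uniform(1.0, 2.0, size=d)                          # positive weights w_1, ..., w_d

    # Synthetic symmetric tensor T = sum_i w_i * v_i (x) v_i (x) v_i; in the
    # estimation methods, T would instead be a whitened third-order moment tensor.
    T = np.einsum('i,ai,bi,ci->abc', w, components, components, components)

    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    for _ in range(100):
        u = np.einsum('abc,b,c->a', T, u, u)  # the multilinear map T(I, u, u)
        u /= np.linalg.norm(u)

    # u now aligns (up to sign) with one of the columns of `components`
    print(np.max(np.abs(components.T @ u)))   # approximately 1.0

With a random start, the iteration converges to one of the components; in practice one deflates the tensor and repeats to recover all of them.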
