Tutorial on Learning Deep Architectures

author: Yoshua Bengio, Department of Computer Science and Operations Research, University of Montreal
author: Yann LeCun, Computer Science Department, New York University (NYU)
published: Aug. 26, 2009,   recorded: June 2009,   views: 41199

Watch videos:

Part 1 (34:41)
Part 2 (27:10)

Description

This short tutorial on deep learning will review a variety of methods for learning multi-level, hierarchical representations, emphasizing their common traits. While deep architectures have theoretical advantages in terms of expressive power and efficiency of representation, they also provide a possible model for information processing in the mammalian cortex, which seems to rely on representations with multiple levels of abstraction. A number of deep learning methods proposed since 2005 have yielded surprisingly good performance in several areas, particularly in vision (object recognition) and natural language processing. They all learn multiple levels of representation using some form of unsupervised learning. Hypotheses to explain why these algorithms work well will be discussed in light of new experimental results. Many of these algorithms can be cast in the framework of the energy-based view of unsupervised learning, which generalizes the graphical models used as building blocks for deep architectures, such as Restricted Boltzmann Machines (RBMs) and variants of regularized auto-encoders. Old and new algorithms will be presented for training, sampling from, and estimating the partition function of RBMs and Deep Belief Networks. Applications of deep architectures to computer vision and natural language processing will be described. A number of open problems and future research avenues will be discussed, with active participation from the audience.
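The description mentions training algorithms for RBMs; the standard one is contrastive divergence (CD-1). As a rough illustration (not code from the lecture), here is a minimal NumPy sketch of a binary RBM updated with CD-1 on toy data; all class and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary-binary Restricted Boltzmann Machine (illustrative)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def cd1_update(self, v0):
        """One contrastive-divergence (CD-1) step on a batch v0."""
        # Positive phase: hidden probabilities given the data
        h0 = sigmoid(v0 @ self.W + self.c)
        # Sample hidden states, then reconstruct the visible units
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.b)
        h1 = sigmoid(v1 @ self.W + self.c)
        # Approximate gradient: data statistics minus reconstruction statistics
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)
        # Reconstruction error is a common (if imperfect) progress monitor
        return np.mean((v0 - v1) ** 2)

# Toy data: binary patterns where the second half copies the first half
data = (rng.random((64, 6)) < 0.5).astype(float)
data[:, 3:] = data[:, :3]

rbm = RBM(n_visible=6, n_hidden=4)
errors = [rbm.cd1_update(data) for _ in range(200)]
print(f"reconstruction error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

Note that CD-1 follows a biased approximation of the log-likelihood gradient; the intractable partition function never has to be computed, which is what makes RBM training practical.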


Download slides: icml09_bengio_lecun_tldar.pdf (2.6 MB)



Reviews and comments:

Comment 1, derbal, February 11, 2010 at 1:35 p.m.:

that is good


Comment 2, Bob, August 20, 2014 at 1:24 p.m.:

The constant (auto?) refocusing on this is annoying.
