Graphical Models for Speech Recognition: Articulatory and Audio-Visual Models

author: Karen Livescu, Toyota Technological Institute at Chicago
published: July 30, 2009, recorded: June 2009, views: 7134

Description

Since the 1980s, the dominant approach to automatic speech recognition has been the hidden Markov model (HMM), in which each state corresponds to a phoneme, or part of a phoneme, in the context of its neighboring phonemes. Despite crudely approximating the speech signal, and despite the large margin for improvement that remains, HMMs have proven difficult to beat. In the last few years, there has been increasing interest in richer graphical models for speech recognition involving multiple streams of states. I will describe two such approaches: one models pronunciation variation as the result of the "sloppy" behavior of articulatory variables (the states of the lips, tongue, etc.), and the other models the audio and visual states in audio-visual speech recognition (i.e., recognition enhanced by "lipreading").
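For concreteness, below is a minimal sketch of the core HMM computation the abstract refers to: the forward algorithm, which sums over all hidden state sequences to score an observation sequence. The two-state model and all probabilities are illustrative toy values, not taken from the lecture; in a real recognizer the states would be context-dependent sub-phone units and the per-frame emission scores would come from acoustic models.

```python
import numpy as np
from scipy.special import logsumexp

def forward(log_init, log_trans, log_obs):
    """Return log P(x_1..x_T) under an HMM, computed in log space.

    log_init:  (S,)   log initial-state probabilities
    log_trans: (S, S) log_trans[i, j] = log P(next state j | state i)
    log_obs:   (T, S) log_obs[t, s]  = log P(observation t | state s)
    """
    T = log_obs.shape[0]
    alpha = log_init + log_obs[0]  # log P(x_1, q_1 = s)
    for t in range(1, T):
        # Sum over all previous states, then score the new observation.
        alpha = log_obs[t] + logsumexp(alpha[:, None] + log_trans, axis=0)
    return logsumexp(alpha)

# Toy example: two hidden states, three observation frames,
# with hand-picked (purely illustrative) probabilities.
log_init  = np.log(np.array([0.6, 0.4]))
log_trans = np.log(np.array([[0.7, 0.3],
                             [0.2, 0.8]]))
log_obs   = np.log(np.array([[0.9, 0.1],
                             [0.4, 0.6],
                             [0.5, 0.5]]))
print(forward(log_init, log_trans, log_obs))  # total log-likelihood
```

The multi-stream graphical models discussed in the talk generalize this picture by replacing the single state variable with a tuple of stream states (e.g., one per articulatory variable, or one each for the audio and visual streams), with factored transition and observation dependencies rather than a single monolithic state space.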
