Approximate Inference in Natural Language Processing

Author: Noah Smith, Language Technologies Institute, Carnegie Mellon University
Published: January 19, 2010. Recorded: December 2009. Views: 5086

Description

I'll start out by presenting an idealized version of the natural language processing problem of parsing. I will brazenly suggest that most of NLP is reducible to variations on parsing problems. I'll show how dynamic programming solves the idealized version of the problem, calculating both modes and marginals over parse trees by exploiting some key independence assumptions about the structure of natural language sentences.
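
As a concrete illustration of that dynamic-programming view, here is a minimal CKY sketch (the toy grammar, sentence, and probabilities are invented for illustration and are not the models from the talk): running the same chart recursion with max as the combining operator gives the mode (best-parse) score, and running it with + gives the marginal (inside) total.

# Minimal CKY sketch: dynamic programming over a toy PCFG in Chomsky normal
# form, computing both the best (Viterbi/"mode") score and the total
# (inside/"marginal") probability of a sentence.  The grammar below is
# invented purely for illustration.
from collections import defaultdict

# Binary rules: parent -> [(left child, right child, probability)]
binary = {
    "S":  [("NP", "VP", 1.0)],
    "VP": [("V", "NP", 1.0)],
    "NP": [("Det", "N", 0.6)],
}
# Lexical rules: parent -> [(word, probability)]
lexical = {
    "Det": [("the", 1.0)],
    "N":   [("dog", 0.5), ("cat", 0.5)],
    "V":   [("saw", 1.0)],
    "NP":  [("dogs", 0.4)],
}

def cky(words, combine):
    """Fill the chart bottom-up; 'combine' is max (Viterbi) or + (inside)."""
    n = len(words)
    chart = defaultdict(float)              # (i, j, label) -> score
    for i, w in enumerate(words):           # length-1 spans from lexical rules
        for parent, rules in lexical.items():
            for word, p in rules:
                if word == w:
                    chart[i, i + 1, parent] = combine(chart[i, i + 1, parent], p)
    for width in range(2, n + 1):           # longer spans from binary rules
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):       # split point
                for parent, rules in binary.items():
                    for left, right, p in rules:
                        score = p * chart[i, k, left] * chart[k, j, right]
                        if score > 0:
                            chart[i, j, parent] = combine(chart[i, j, parent], score)
    return chart[0, n, "S"]

sent = "the dog saw the cat".split()
print("Viterbi (mode) score:", cky(sent, max))                  # best single parse
print("Inside (sum) score:  ", cky(sent, lambda a, b: a + b))   # sum over all parses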

I will then discuss two approximate inference methods that let us build more powerful models of parsing. Neither comes with strong theoretical guarantees, but both perform well in experiments on real NLP data. The first method builds on the dynamic programming representation, combining max-product and sum-product methods to produce, approximately, the k-best parses together with a residual sum over the remaining parses; this is useful when incorporating features that violate the usual independence assumptions. Experiments validate the approach with a discriminative model for machine translation.
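
One way to picture that combination, as a toy sketch rather than the speaker's actual algorithm, is to attach to each chart item a value that stores its top-k derivation scores explicitly and folds everything else into a residual sum. The two operations below (with k and all scores invented) keep the total probability mass exact, while the k-best list itself remains an approximation.

# Illustrative "k-best plus residual" value for a chart item: keep the top-K
# derivation scores explicitly (max-product style) and sum everything else
# into a residual (sum-product style).  A toy re-imagining of the idea
# described above, not the speaker's implementation.
K = 3

def combine_alternatives(a, b):
    """'Plus': two ways to build the same item; keep top-K scores, rest -> residual."""
    (ka, ra), (kb, rb) = a, b
    merged = sorted(ka + kb, reverse=True)
    kept, dropped = merged[:K], merged[K:]
    return kept, ra + rb + sum(dropped)

def combine_parts(a, b, rule_prob=1.0):
    """'Times': an item built from two sub-items.  Only products of kept scores
    can enter the new k-best list (the approximation); the residual absorbs
    everything else so that kept + residual still equals the exact total mass."""
    (ka, ra), (kb, rb) = a, b
    products = sorted((rule_prob * x * y for x in ka for y in kb), reverse=True)
    kept = products[:K]
    total = rule_prob * (sum(ka) + ra) * (sum(kb) + rb)   # exact total mass
    return kept, total - sum(kept)

# Toy usage: each sub-item is (top-k scores, residual sum).
left  = ([0.5, 0.2], 0.05)
right = ([0.4, 0.3], 0.10)
item = combine_parts(left, right, rule_prob=0.9)
print("kept k-best:", item[0])
print("residual sum: %.3f" % item[1])
print("merged with an alternative:", combine_alternatives(item, ([0.25, 0.2], 0.0)))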

The second method turns a parsing problem instance into a concise integer linear program. Approximate inference is then accomplished using a well-known linear programming relaxation. This is embedded in a new online learning algorithm that tries to penalize uninterpretable fractional solutions (and thereby reduce inference cost at evaluation time). We show that this approach leads to state-of-the-art parsing performance on seven languages, with improved speed for both exact and approximate inference and no significant performance loss.
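
To make the relaxation idea concrete, here is a hedged sketch of a tiny arc-factored parsing integer program relaxed to a linear program with SciPy's linprog. The sentence, arc scores, and the omission of tree (acyclicity/projectivity) constraints are all simplifications; in an instance this small the relaxed optimum happens to come out integral, whereas richer constraint sets can yield the fractional solutions that the learning algorithm above is designed to penalize.

# Sketch of relaxing a tiny parsing-style integer program to a linear program.
# Each word must choose exactly one head (an arc-factored view); a real parser's
# ILP would require x in {0, 1} plus tree constraints, while the relaxation
# below only enforces 0 <= x <= 1 with one head per word.  Scores and the
# sentence are invented for illustration.
import numpy as np
from scipy.optimize import linprog

words = ["ROOT", "dogs", "bark"]
n = len(words)

# arc_score[h, m]: score for word m taking word h as its head (made up).
arc_score = np.array([[0.0, 2.0, 0.5],
                      [0.0, 0.0, 3.0],
                      [0.0, 1.0, 0.0]])

# One variable per (head, modifier) pair; ROOT never acts as a modifier.
arcs = [(h, m) for m in range(1, n) for h in range(n) if h != m]
c = np.array([-arc_score[h, m] for h, m in arcs])   # linprog minimizes, so negate

# Equality constraints: each modifier picks exactly one head.
A_eq = np.zeros((n - 1, len(arcs)))
for row, m in enumerate(range(1, n)):
    for col, (h, mm) in enumerate(arcs):
        if mm == m:
            A_eq[row, col] = 1.0
b_eq = np.ones(n - 1)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(arcs), method="highs")
for (h, m), val in zip(arcs, res.x):
    if val > 1e-6:
        print(f"{words[h]} -> {words[m]}: {val:.2f}")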

Download slides: nipsworkshops09_smith_ainlp_01.pdf (5.9 MB)

