Structured Prediction: Maximum Margin Techniques

author: Nathan Ratliff, Robotics Institute, School of Computer Science, Carnegie Mellon University
published: Feb. 7, 2008,   recorded: January 2008

Description

Traditionally there has been a mismatch between the requirements of nontrivial applications and the prediction tools offered by machine learning. Applications such as natural language processing, optical character recognition, and path planning are often implemented in terms of combinatorial inference algorithms, such as parsing algorithms, Viterbi decoding, and A* planning. These inference algorithms necessarily exploit the inherent structure of the problem to efficiently navigate an exponential number of target elements, such as the set of all parse trees for a sentence, the set of possible words of a particular length, or the set of all paths between two points in a graph. On the other hand, research into supervised learning techniques in machine learning and statistics has focused primarily on regression and classification algorithms that at best handle only a handful of classes. These techniques cannot be applied directly to most applications. Typically, engineers are required to meticulously define learnable subproblems by imposing independence assumptions that are often strongly violated in practice.
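
To make the structure-exploiting character of these inference algorithms concrete, the following is a minimal Viterbi decoding sketch for a chain-structured model. It is an illustration rather than anything from the lecture: the emission and transition score arrays are hypothetical placeholders supplied by the caller.

    import numpy as np

    def viterbi(emission, transition):
        """emission: (T, K) per-position label scores; transition: (K, K)
        scores for moving from label i to label j.  Returns the
        highest-scoring label sequence among all K**T candidates in
        O(T * K^2) time by exploiting the chain structure rather than
        enumerating sequences."""
        T, K = emission.shape
        score = emission[0].copy()              # best score ending in each label
        backptr = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            cand = score[:, None] + transition + emission[t][None, :]
            backptr[t] = cand.argmax(axis=0)    # best previous label per label
            score = cand.max(axis=0)
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):           # follow back-pointers
            path.append(int(backptr[t, path[-1]]))
        return path[::-1]

With three labels per position and a sequence of length 20, for example, the decoder implicitly ranks roughly 3.5 billion candidate labelings while only filling a 20-by-3 table.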

In recent years, however, the advent of conditional random fields, and then maximum margin structured classification, has changed the way the machine learning community views these problems. Researchers have found ways in which the inherent structure in the problems can be used to directly train these combinatorial inference procedures. Dubbed structured prediction, this class of algorithms utilizes the same implicit structural properties that make the inference algorithms efficient.

In this presentation, after introducing structured prediction at a high level, I will cover in detail one of the two most cited formalisms of structured prediction: maximum margin structured classification. With particular emphasis on functional gradient techniques, I will present a number of algorithms for solving these problems, along with their results on various applications and a discussion of the relative trade-offs.
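
At the core of maximum margin structured classification is a convex upper bound on the prediction loss, the structured hinge, whose key operation is loss-augmented inference. The sketch below is a simplified, hedged illustration using a fixed linear weight vector; the names features, argmax_loss_augmented, and the step size eta are hypothetical stand-ins, and the functional gradient variants emphasized in the talk take analogous steps in a space of hypotheses rather than over a single weight vector.

    import numpy as np

    def structured_hinge_step(w, x, y_true, features, argmax_loss_augmented, eta=0.1):
        """One subgradient step on the structured hinge loss
            max_y [ w . f(x, y) + loss(y_true, y) ] - w . f(x, y_true),
        which upper-bounds the task loss of the current predictor.
        argmax_loss_augmented must solve loss-augmented inference, e.g.
        Viterbi decoding or A* search with the per-element loss folded
        into the costs."""
        y_hat = argmax_loss_augmented(w, x, y_true)    # worst margin violator
        g = features(x, y_hat) - features(x, y_true)   # subgradient of the hinge
        return w - eta * g                             # regularization omitted

Because the only structure-specific component is the loss-augmented argmax, the same update applies whether the underlying inference procedure is a parser, a Viterbi decoder, or a planner.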
