Machine Learning of Language from Distributional Evidence

author: Christopher Manning, Computer Science Department, Stanford University
published: Feb. 10, 2012, recorded: October 2007, views: 3314

Description

Christopher Manning thinks linguistics went astray in the 20th century when it searched “for homogeneity in language, under the misguided assumption that only homogeneous systems can be structured.” In the face of human creativity with language, rigid categories of linguistic use simply don't explain how people actually talk or what they choose to say. For every hard-and-fast rule linguists find, other linguists can turn up an exception. Categorical constraints rise, then come crashing down.

Manning argues for accepting language as a variable system, and for finding structure in that system with probabilistic methods. He applies quantitative techniques to sentence structure, measuring how frequently, and with what probability, people use specific turns of phrase in particular real-world contexts. Looking at distributions in the ways people express ideas in a language “can give a much richer description of how language is used.” Indeed, Manning finds that certain hard constraints on sentence structure in one language “show up as softer constraints and preferences in other languages.”
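
As a rough illustration of the kind of quantitative evidence involved (a toy sketch, not Manning's actual data or method), one can estimate how strongly a verb prefers one competing construction over another simply from relative frequencies in labelled corpus examples. The verbs, labels, and counts below are invented for demonstration:

    from collections import Counter

    # Toy, hand-labelled observations of which construction a verb was used with.
    # The data are invented for illustration only.
    observations = [
        ("give", "double-object"),    # e.g. "gave her the book"
        ("give", "prepositional"),    # e.g. "gave the book to her"
        ("give", "double-object"),
        ("donate", "prepositional"),  # e.g. "donated the book to the library"
        ("donate", "prepositional"),
    ]

    pair_counts = Counter(observations)
    verb_counts = Counter(verb for verb, _ in observations)

    # Relative frequency as a simple estimate of P(construction | verb).
    for (verb, construction), count in sorted(pair_counts.items()):
        probability = count / verb_counts[verb]
        print(f"P({construction} | {verb}) = {count}/{verb_counts[verb]} = {probability:.2f}")

On this invented data the estimates come out as soft preferences rather than absolute rules, e.g. P(double-object | give) = 0.67 against P(prepositional | donate) = 1.00, which is the kind of graded pattern the description alludes to.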

Manning looks at raw data, like sentences from the Wall Street Journal, and gleans such information as typical word associations that begin to “tell us about the dependencies of verbs and arguments.” He looks at dependencies between words, the distances between them, and a sentence's flow from left to right. Classes and clusters of words emerge, yielding distributionally learned categories, and certain classes of syntax naturally fall together. Manning builds nested phrase-structure trees and branching structures, and derives simple probabilistic models that help explain “gradual learning and robustness in acquisition, non-homogeneous grammars of individuals, and gradual language change over time.” Manning says computational linguistics is also proving useful in applied fields such as information retrieval, machine translation, and text mining.
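
To make “distributionally learned categories” concrete, here is a minimal sketch (a toy illustration under invented assumptions, not the models from the lecture): each word is represented by counts of its left and right neighbours in a tiny corpus, and clustering those count vectors groups words with similar distributions. The corpus, the feature choice, and the number of clusters are all assumptions made for demonstration; the clustering step uses scikit-learn's KMeans.

    from collections import defaultdict

    import numpy as np
    from sklearn.cluster import KMeans

    # Tiny invented corpus; real work would use something like Wall Street Journal text.
    corpus = [
        "the cat chased the dog",
        "a dog chased a cat",
        "the dog ate the bone",
        "a cat ate the fish",
    ]

    # Represent each word by counts of its immediate left and right neighbours.
    contexts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for i in range(1, len(tokens) - 1):
            contexts[tokens[i]][("L", tokens[i - 1])] += 1
            contexts[tokens[i]][("R", tokens[i + 1])] += 1

    words = sorted(contexts)
    features = sorted({f for ctx in contexts.values() for f in ctx})
    matrix = np.array([[contexts[w][f] for f in features] for w in words], dtype=float)

    # Words whose context vectors look alike land in the same cluster:
    # a distributionally learned word class.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(matrix)
    for cluster in range(3):
        print(cluster, [w for w, lab in zip(words, labels) if lab == cluster])

On this toy corpus the clusters tend to separate the determiners ("a", "the") from the nouns and the verbs, though with so little data the grouping is not guaranteed; the point is only that word classes can emerge from distributional evidence alone, with no categories supplied in advance.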
