Statistical Natural Language Parsing: Reliable Models of Language?

author: Sandiway Fong, Department of Linguistics and Department of Computer Science, University of Arizona
published: Feb. 10, 2012, recorded: October 2007

Description

The statistical natural language linguist owes much to the University of Pennsylvania’s famous Treebank project. But this giant corpus of one million words (49 thousand sentences from the Wall Street Journal, each carefully labeled for its syntactic and semantic components) is both a “blessing and a curse,” says Sandiway Fong. This “gold standard” collection of parsed sentences, the result of more than a decade of work, has become “the only game in town,” according to Fong. Linguists developing natural language algorithms often rely on the Penn Treebank to construct and train probabilistic context-free grammars, and Fong acknowledges the Treebank’s revolutionary impact on the field. But he also thinks it’s worthwhile to examine how systems that rely on the Penn Treebank actually perform.
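
The pipeline Fong describes can be made concrete in a few lines of code. Below is a minimal sketch of Treebank-style PCFG induction using NLTK; the two toy bracketed trees and the test sentence are invented for illustration (real systems train on the full Penn Treebank), and the calls shown (Tree.fromstring, induce_pcfg, ViterbiParser) are standard NLTK.

```python
# A toy sketch of Treebank-style PCFG induction with NLTK.
# The two bracketed trees below are invented stand-ins for the
# 49,000 hand-parsed Wall Street Journal sentences.
from nltk import Tree
from nltk.grammar import Nonterminal, induce_pcfg
from nltk.parse import ViterbiParser

treebank = [
    Tree.fromstring(
        "(S (NP (DT The) (NNS girls)) (VP (VBD read) (NP (DT the) (NN paper))))"),
    Tree.fromstring(
        "(S (NP (DT The) (NNS boys)) (VP (VBD bought) (NP (DT a) (NN paper))))"),
]

# Collect every rule instance from every tree; induce_pcfg turns
# their relative frequencies into rule probabilities.
productions = []
for tree in treebank:
    productions += tree.productions()
grammar = induce_pcfg(Nonterminal("S"), productions)

# Recover the most probable parse of an unseen sentence (Viterbi).
parser = ViterbiParser(grammar)
for parse in parser.parse("The boys read a paper".split()):
    print(parse)
```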

He has been exploring three basic questions: Do such systems attain cognitively plausible knowledge of language, such as distinguishing grammatical from ungrammatical sentences? How brittle are these systems: if you misspell a word or flip one part of a sentence, will the system still “give you back some parse”? Can these systems learn non-natural languages?

Fong has unearthed some interesting issues. For instance, two well-known parsing systems scored no better than 50% at figuring out the right way to pronounce the word “read” in eight sentences that deployed its past and present tenses (e.g., The girls will read the paper; The girls have read the paper). And the two systems didn’t get the same sentences wrong. Fong wonders whether “reading the Wall Street Journal is not a good way to learn how to pronounce ‘read’ or ‘red.’” Fong also demonstrated that a parsing system’s output could hinge on the presence (or absence) of a single training example involving the phrase “milk with 4% butterfat,” calling into question whether such systems are truly robust.
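
Fong’s “read” test can be replayed in spirit with any statistical tagger or parser. The sketch below is an assumption on my part: it substitutes NLTK’s off-the-shelf POS tagger for the parsers Fong evaluated, which the lecture description leaves unnamed. The logic is the same: the tag the model assigns to “read” (base form vs. past participle) determines the pronunciation, so a mistagged sentence yields a mispronounced word.

```python
# A sketch of the "read" pronunciation test, with NLTK's default
# POS tagger standing in for the parsers Fong actually evaluated.
# Requires: nltk.download("averaged_perceptron_tagger")
import nltk

# Base forms (VB, VBP) are pronounced /riːd/; the past tense and
# past participle (VBD, VBN) are pronounced /rɛd/, like "red".
PRONUNCIATION = {"VB": "/riːd/", "VBP": "/riːd/",
                 "VBD": "/rɛd/", "VBN": "/rɛd/"}

sentences = [
    "The girls will read the paper",   # future: expect /riːd/
    "The girls have read the paper",   # perfect: expect /rɛd/
]

for sentence in sentences:
    tags = dict(nltk.pos_tag(sentence.split()))
    tag = tags["read"]
    print(f"{sentence!r}: 'read' tagged {tag} -> {PRONUNCIATION.get(tag, '?')}")
```

Whether the tagger gets both sentences right is exactly Fong’s question; on his eight-sentence set, the two parsers he tested managed no better than chance.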

While Treebank-based parsing systems demonstrably perform well on Treebank-like sentences, one cannot infer that they have achieved grammatical competence or linguistic stability. We must understand, says Fong, that 40 thousand training samples do not give computational systems coverage of the broad range of linguistic cases that ordinary people pick up nearly effortlessly. “We expect statistical systems to be able to deal with noise. But they are extremely fragile, despite their statistical nature and training over a large data set.”
