Human Simulations of Language Learning

author: Michael Coen, Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison
author: Lila Gleitman, Department of Psychology, University of Pennsylvania
published: Feb. 10, 2012, recorded: October 2007

Description

This workshop, explains Michael Coen, is an effort to engender temperate, collaborative discussion of a matter that inspires hot dispute: whether machine learning helps explain how humans acquire language. In particular, says Coen, machine learning advocates believe they have evidence against Noam Chomsky’s “poverty of the stimulus” argument, which holds that “children don’t receive enough linguistic inputs to explain linguistic outputs,” and that much of language must therefore be built into us.

Coen, who doesn’t think much of such claims, worries about a deeper problem: that scientists have “begun to discuss engineering at the expense of science.” He describes 13-year-old Bobby Fischer’s astonishing game against a leading chess master, in which Fischer managed to look 16 moves ahead, effectively eliminating about 10 to the 30th board positions. We had no way to represent his thinking process then, and we don’t today, even though scientists have built a machine, Deep Blue, that can topple any human chess champion. It seems there’s nothing left to say about chess, yet we know absolutely nothing about how humans play chess, says Coen. “If you’re an engineer, this may be fine, but if you’re a scientist, that’s deeply troubling.”

One problem with machine models, says Lila Gleitman, is that “they don’t try to learn what the human already knows,” and we really aren’t sure “how big a piece of the pie that is in the first place.” Gleitman distinguishes between acquiring language and acquiring *a* language, like French or German. In her years of researching how children learn language, and specifically children who have been deprived of linguistic input entirely, Gleitman does not find a blank slate: “Children don’t just sit there; they start to make gestures.” Gleitman reviews various studies that describe a basic sequence in language acquisition that holds true regardless of specific ‘inputs.’ If researchers’ models are to be “of any interest, they ought to take into account the fact that you may not have to learn some of this.”

Gleitman has conducted simulations with adults, giving them incomplete scenes on video or paper (dropping words or substituting Lewis Carroll-style doggerel) to see how we acquire the meanings of common nouns and verbs through contextual clues and inference. The more sources of evidence people get in these tests, the better they do. But such language acquisition “doesn’t scale up” to higher-level categories of words, such as “think.” Says Gleitman, “It’s crazy… to suppose there’s no biological given in a language learning situation. There’s plenty. Some of it is maybe the substance of language and some of that is about the sophisticated learning procedures themselves.” So any kind of “informative statistical modeling requires a matrix of conspiring cues, intrinsically ordered in time of appearance… Realistic models of incremental learning will incorporate what the learner brings to the task.”
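
As a rough illustration of the point that more converging cues yield better word learning, below is a minimal, hypothetical cross-situational learning sketch in Python. It is not a model from Gleitman's studies: the vocabulary, trial counts, and the way extra cues are represented (simply as fewer distractor objects per scene) are all invented for illustration.

```python
# Toy cross-situational word-learning simulation (hypothetical illustration).
# A learner tallies word-object co-occurrences across scenes and guesses each
# word's referent; richer converging cues are modeled as fewer distractors.

import random
from collections import defaultdict

random.seed(0)

WORDS = [f"word{i}" for i in range(20)]                      # 20 novel words
TRUE_REFERENT = {w: f"object{i}" for i, w in enumerate(WORDS)}
ALL_OBJECTS = list(TRUE_REFERENT.values())

def run_learner(n_trials=200, n_distractors=8):
    """Tally co-occurrences over scenes, then guess each word's referent."""
    counts = defaultdict(lambda: defaultdict(int))
    for _ in range(n_trials):
        word = random.choice(WORDS)
        # Each scene contains the true referent plus some distractor objects.
        distractors = random.sample(
            [o for o in ALL_OBJECTS if o != TRUE_REFERENT[word]], n_distractors)
        scene = distractors + [TRUE_REFERENT[word]]
        for obj in scene:
            counts[word][obj] += 1
    guesses = {w: max(counts[w], key=counts[w].get) for w in counts}
    correct = sum(guesses[w] == TRUE_REFERENT[w] for w in guesses)
    return correct / len(guesses)

# Fewer distractors per scene (i.e., more informative cues) -> higher accuracy.
for n in (12, 8, 4, 1):
    print(f"{n:2d} distractors per scene -> accuracy {run_learner(n_distractors=n):.2f}")
```

The sketch only captures the narrow statistical point that converging evidence sharpens mapping of words to referents; it says nothing about the prior knowledge and ordered cues Gleitman argues realistic models must build in.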
