Computers with Commonsense: Artificial Intelligence at the MIT Round Table

author: Patrick Henry Winston, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology (MIT)
published: Aug. 12, 2010,   recorded: June 2009

Description

Visiting the San Diego Zoo’s orangutans and chimpanzees inspires Patrick Henry Winston to ponder what makes humans different from our primate cousins. His field of artificial intelligence extends that question to thinking about how humans differ from computers. Winston’s goal is to “develop a computational theory of intelligence.”

Bridging the gap from people to machines requires a complex understanding of how we think. Winston asserts that we think with our eyes, our hands, and our mouths. Humans rely on visual, motor, and linguistic faculties to learn and solve problems. Perceptual powers enable naming, describing, categorizing, and recalling. In the aggregate, these processes are “commonsense,” a hallmark of cognition that Winston aims to vest in computer programs -- to endow transistors with the nuanced capabilities of neurons.

Crucially, we also think with our stories. Throughout childhood and formal education, we are taught via fairy tales, myths, history, literature, religion, and popular entertainment. Professional disciplines like law, science, medicine, engineering, and business are conveyed through stories too.

Recognizing patterns, relationships, and mistakes, as well as abstract concepts like revenge or success, helps us explain, predict, and answer questions. The delicate processes of extracting knowledge and capturing meaning may appear seamless or instinctive in the evolved mind, but they must be parsed syntactically to “teach” a computer to achieve the same ends.

What practical applications might there be “for systems that understood stories”? Winston suggests that decision-making in business and military strategy would benefit. And no less, comprehending cultures. If a computer program could derive clues from context, perhaps it could determine why “what plays in Peoria” doesn’t translate to Baghdad.

Early efforts to build a computational theory of intelligence focused on “symbolic integration…We figured out how to make programs do calculus by 1960…but computers remained as dumb as stones,” Winston says. When we progressed to building robots -- “things that move” -- language was still lacking. “We forgot that the distinguishing characteristic of human intelligence is that linguistic veneer that stands above our perceptual apparatus,” he remarks.
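
The milestone Winston cites refers to early symbolic-integration programs such as James Slagle’s SAINT, developed at MIT. As a present-day illustration (not from the lecture), a few lines of Python with the SymPy library perform the same kind of calculus:

    import sympy as sp

    # Symbolically integrate x*cos(x) -- the kind of calculus problem
    # AI programs were already solving around 1960.
    x = sp.symbols('x')
    antiderivative = sp.integrate(x * sp.cos(x), x)
    print(antiderivative)  # prints: x*sin(x) + cos(x)

That such a feat is now a single library call underscores Winston’s point: mechanizing calculus proved far easier than mechanizing commonsense.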

A paradox emerging from Winston’s study of how humans think is that “computers make us stupid.” For instance, when students are freed from taking notes, the absence of “forced engagement” with the material hinders learning. He cautions that teachers confuse the “presentation of information with the delivery of information.” Too many words on a slide (or talking too fast) “jams the language processor” and impedes digesting content.

Winston summarizes with an appealing prescription for becoming smarter. “Take notes…draw pictures…talk and imagine…tell stories!” The very act of explaining to another elucidates a lesson for oneself.
