Commonsense Intelligence: Cracking the Longstanding Challenge in AI

author: Yejin Choi, University of Washington
published: May 3, 2021, recorded: April 2021
Description

Despite considerable advances in deep learning, AI remains narrow and brittle. One fundamental limitation is its lack of common sense: intuitive reasoning about everyday situations and events, which in turn requires a wide spectrum of commonsense knowledge about how the physical and social world works, ranging from naive physics to folk psychology to ethical norms. In this talk, I will share our recent adventures in building neuro-symbolic commonsense models by melding the symbolic, declarative knowledge stored in large-scale commonsense graphs with the neural, implicit knowledge stored in large-scale neural language models. I will conclude the talk by discussing the need to depart from the currently prevalent learning paradigms, which lead to task- or even dataset-specific learning, and by laying out open research questions for commonsense AI in light of human cognition.
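The melding the abstract describes can be illustrated with a minimal sketch: symbolic commonsense triples, as stored in graphs such as ATOMIC or ConceptNet, are serialized into natural-language sentences so that a neural language model can be fine-tuned on them (the approach popularized by COMET). The relation names and sentence templates below are simplified illustrative assumptions, not the actual dataset schema or the speaker's exact method.

```python
# Sketch: turn (head, relation, tail) knowledge-graph triples into text
# that a language model could be trained on. Templates are assumed and
# simplified for illustration.

# Map relation tags to natural-language templates (hypothetical).
TEMPLATES = {
    "xIntent": "{head}. Because PersonX wanted {tail}.",
    "xEffect": "{head}. As a result, PersonX {tail}.",
    "AtLocation": "You are likely to find {head} in {tail}.",
}

def triple_to_example(head: str, relation: str, tail: str) -> str:
    """Serialize one symbolic triple into a training sentence."""
    return TEMPLATES[relation].format(head=head, tail=tail)

# A few toy triples in the spirit of social/physical commonsense graphs.
triples = [
    ("PersonX pays for PersonY's coffee", "xIntent", "to be nice"),
    ("PersonX spills the coffee", "xEffect", "apologizes"),
    ("a coffee machine", "AtLocation", "an office kitchen"),
]

corpus = [triple_to_example(*t) for t in triples]
for sentence in corpus:
    print(sentence)
```

Fine-tuning a pretrained language model on such a serialized corpus is what lets the model generalize beyond the graph: it can then generate plausible inferences for heads and events that never appear in the symbolic knowledge base.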
