Joshua B. Tenenbaum
homepage: http://web.mit.edu/cocosci/josh.html

Description

My colleagues and I in the Computational Cognitive Science group study one of the most basic and distinctively human aspects of cognition: the ability to learn so much about the world, rapidly and flexibly. Given just a few relevant experiences, even young children can infer the meaning of a new word, the hidden properties of an object or substance, or the existence of a new causal relation or social rule. These inferences go far beyond the data given: after seeing three or four examples of "horses", a two-year-old will confidently judge whether any new entity is a horse or not, and she will be mostly correct, except for the occasional donkey or camel.

We want to understand these everyday inductive leaps in computational terms. What is the underlying logic that supports reliable generalization from so little data? What are its cognitive and neural mechanisms, and how can we build more powerful learning machines based on the same principles?

These questions demand a multidisciplinary approach. Our group's research combines computational models (drawing chiefly on Bayesian statistics, probabilistic generative models, and probabilistic programming) with behavioral experiments in adults and children. Our models make strong quantitative predictions about behavior, but more importantly, they attempt to explain why cognition works, by viewing it as an approximation to ideal statistical inference given the structure of natural tasks and environments.
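
To make this concrete, here is a minimal, hypothetical sketch of Bayesian concept learning in the spirit of the horse example above: hypotheses are nested categories, a "size principle" likelihood favors the smallest category consistent with the observed examples, and generalization to a new item averages over the posterior. The category names, sizes, and priors below are invented for illustration, not taken from any actual model in the group's papers.

```python
# Toy Bayesian concept learning with the size principle.
# hypothesis -> (set of members, category size, prior probability)
# All names, sizes, and priors here are illustrative assumptions.
HYPOTHESES = {
    "horses": ({"horse1", "horse2", "horse3", "pony"}, 4, 0.40),
    "equids": ({"horse1", "horse2", "horse3", "pony",
                "donkey", "zebra"}, 6, 0.35),
    "animals": ({"horse1", "horse2", "horse3", "pony",
                 "donkey", "zebra", "camel", "dog"}, 8, 0.25),
}

def posterior(examples):
    """P(h | examples): likelihood is (1/|h|)^n if h contains all
    examples (the size principle), and 0 otherwise."""
    n = len(examples)
    scores = {}
    for name, (members, size, prior) in HYPOTHESES.items():
        consistent = all(x in members for x in examples)
        scores[name] = prior * (1.0 / size) ** n if consistent else 0.0
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()} if total else scores

def p_in_concept(item, examples):
    """Probability a new item falls under the concept: sum the
    posterior over every hypothesis that contains the item."""
    post = posterior(examples)
    return sum(p for h, p in post.items() if item in HYPOTHESES[h][0])

examples = ["horse1", "horse2", "horse3"]
print(p_in_concept("pony", examples))    # high: in every consistent hypothesis
print(p_in_concept("donkey", examples))  # lower: only in broader categories
```

With three horse examples, most of the posterior mass lands on the narrowest consistent category, so a pony is accepted with high probability while a donkey is not, mirroring the two-year-old who is mostly correct except for the occasional donkey.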

While our core interests are in human learning and reasoning, we also work actively in machine learning and artificial intelligence. These two programs are inseparable: bringing machine-learning algorithms closer to the capacities of human learning should lead to more powerful AI systems as well as more powerful theoretical paradigms for understanding human cognition.

Current research in our group explores the computational basis of many aspects of human cognition: learning concepts, judging similarity, inferring causal connections, forming perceptual representations, learning word meanings and syntactic principles in natural language, noticing coincidences and predicting the future, inferring the mental states of other people, and constructing intuitive theories of core domains, such as intuitive physics, psychology, biology, or social structure.


Lectures:

invited talk
How to Grow a Mind: Statistics, Structure and Abstraction
as author at 24th Annual Conference on Neural Information Processing Systems (NIPS), Vancouver 2010
47857 views

invited talk
How to Grow a Mind: Statistics, Structure and Abstraction
as author at AI for Good Foundation (AI4Good) "Artificial Intelligence to Help the World"
37169 views

invited talk
Bayesian models of human inductive learning
as author at 24th Annual International Conference on Machine Learning (ICML), Corvallis 2007
27344 views

tutorial
Machine Learning and Cognitive Science
as author at Machine Learning Summer School (MLSS), Cambridge 2009
12098 views

tutorial
What is cognitive science?
as author at Cognitive Science and Machine Learning Summer School (MLSS), Sardinia 2010
10878 views

lecture
Two architectures for one-shot learning
as author at NIPS Workshops, Lake Tahoe 2013
3436 views

lecture
Finding structure in data
as author at Cognitive Science and Machine Learning Summer School (MLSS), Sardinia 2010
4756 views

lecture
Bayesian models and cognitive development
as author at Cognitive Science and Machine Learning Summer School (MLSS), Sardinia 2010
4622 views

lecture
Explorations in Language Learnability Using Probabilistic Grammars and Child-directed Speech
as author at MIT World Series: Where Does Syntax Come From? Have We All Been Wrong?
3771 views