How to Grow a Mind: Statistics, Structure and Abstraction
published: Aug. 17, 2012, recorded: July 2012, views: 37169
Description
The fields of cognitive science and artificial intelligence grew up together, with the twin goals of understanding human minds and making machines smarter in more humanlike ways. Yet since the 1980s they have mostly grown apart, as cognitive scientists came to see AI as too focused on applications and technical engineering issues rather than big questions of intelligence, while AI researchers came to see cognitive science as too informal and concerned with peculiarities of human minds and brains rather than general principles. Just in the last few years, however, these fields appear poised to reconverge in exciting and deep ways. Cognitive scientists have begun to adopt the toolkit of modern probabilistic AI as a unifying framework for modeling natural intelligence, while many AI researchers are looking beyond immediate applications to some of the big picture questions that originally motivated the field, and both communities are increasingly aware of and even informed by the other's moves in these directions.
This talk will describe recent work at the center of the convergence: computational accounts of human intelligence that both draw on and advance state-of-the-art AI. I will focus on capacities for which even young children still far surpass machines: learning from very few examples, and common sense reasoning about the physical and social world. These abilities can be explained as approximate forms of probabilistic (Bayesian) inference over richly structured models — probabilistic models built on top of knowledge representations familiar from earlier, classic AI days, such as graphs, grammars, schemas, predicate logic, and functional programs. In many cases, sampling-based approximate inference with these models can be surprisingly tractable and can predict human judgments with high quantitative accuracy. Extended in a hierarchical nonparametric Bayesian framework, these models can explain how children learn to learn, bootstrapping adult-like intelligence from more primitive foundations. Using probabilistic programming languages, these models can be integrated into a unified cognitive architecture. Throughout the talk I will present concrete examples, along with a few more speculative predictions, of how these cognitive modeling efforts can inform the development of more intelligent machine systems.
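The "learning from very few examples" theme can be made concrete with a toy Bayesian concept-learning model in the spirit of Tenenbaum's number game: a handful of positive examples, a small hypothesis space of candidate concepts, a likelihood given by the size principle, and generalization by averaging over the posterior. The sketch below is illustrative only; the particular hypothesis space, uniform prior, and example numbers are assumptions for demonstration, not material from the talk.

```python
# Minimal sketch of Bayesian concept learning from a few positive examples
# ("number game" style). Hypotheses are candidate concepts over 1..100,
# the likelihood uses the size principle, and the posterior is averaged
# to predict whether a new number belongs to the concept.
# The hypothesis names, prior, and example data are illustrative assumptions.

def hypotheses(limit=100):
    """Map hypothesis names to the set of numbers each concept covers."""
    return {
        "even": {n for n in range(1, limit + 1) if n % 2 == 0},
        "odd": {n for n in range(1, limit + 1) if n % 2 == 1},
        "powers_of_two": {2 ** k for k in range(1, 7)},        # 2..64
        "multiples_of_10": set(range(10, limit + 1, 10)),
        "between_60_and_80": set(range(60, 81)),
    }

def posterior(examples, hs, prior=None):
    """Compute P(h | examples); size principle gives P(x | h) = 1/|h| per example."""
    prior = prior or {name: 1.0 / len(hs) for name in hs}      # uniform prior by default
    scores = {}
    for name, members in hs.items():
        if all(x in members for x in examples):
            scores[name] = prior[name] * (1.0 / len(members)) ** len(examples)
        else:
            scores[name] = 0.0                                  # hypothesis ruled out
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

def prob_in_concept(y, post, hs):
    """P(y belongs to the concept | examples), averaging over the posterior."""
    return sum(p for name, p in post.items() if y in hs[name])

if __name__ == "__main__":
    hs = hypotheses()
    post = posterior([16, 8, 2, 64], hs)                        # a few positive examples
    print({k: round(v, 3) for k, v in post.items() if v > 0})
    print("P(32 in concept) =", round(prob_in_concept(32, post, hs), 3))
    print("P(10 in concept) =", round(prob_in_concept(10, post, hs), 3))
```

Because narrower hypotheses receive sharply higher likelihood as examples accumulate, four examples like 2, 8, 16, 64 concentrate the posterior on "powers of two" rather than the broader "even numbers", mirroring the rapid generalization from sparse data discussed in the abstract.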
Reviews and comments:
Thank you for the great video!