The Neuroscience of Reinforcement Learning

author: Yael Niv, Princeton Neuroscience Institute, Princeton University
published: Aug. 26, 2009, recorded: June 2009
Description

Overview and goals:

One of the most influential contributions of machine learning to understanding the human brain is the (fairly recent) formulation of learning in real-world tasks in terms of the computational framework of reinforcement learning. This confluence is not limited to abstract ideas about how trial-and-error learning should proceed: current views regarding the computational roles of key brain substances (such as dopamine) and brain areas (such as the basal ganglia) draw heavily from reinforcement learning. The results of this growing line of research stand to contribute not only to neuroscience and psychology, but also to machine learning: human and animal brains are remarkably adept at learning new tasks in an uncertain, dynamic and extremely complex world. Understanding how the brain implements reinforcement learning efficiently may suggest similar solutions to problems in engineering and artificial intelligence. This tutorial will present the current state of the study of neural reinforcement learning, with an emphasis both on what it teaches us about the brain and on what it teaches us about reinforcement learning.

Target Audience: The target audience is researchers working in the field of reinforcement learning who are interested in the current state of the art of neuroscientific applications of this theoretical framework, as well as researchers in related fields such as engineering and robotics. Basic familiarity with reinforcement learning (MDPs, dynamic programming, online temporal-difference methods) will be assumed; background in neuroscience or psychology will not.

Tutorial outline:

- Introduction: a coarse-grained overview of the brain and what we currently know about how it works
- Learning and decision making in animals and humans: is this really a reinforcement learning problem?
- Dopamine and prediction errors: what we know about dopamine, why we think it computes a temporal difference prediction error, and why we should care (minimal code sketches of the core update rules in this outline follow below)
- Evidence for the prediction error hypothesis of dopamine
- Actor/Critic architectures in the basal ganglia: a distribution of functions in a learning network
- SARSA versus Q-learning: can dopamine reveal which algorithm the brain actually uses?
- Multiple learning systems in the brain: what is the evidence for both model-based and model-free reinforcement learning systems in the brain, why have more than one system, and how to arbitrate between them
- Beyond phasic dopamine: average-reward reinforcement learning, tonic dopamine and the control of response vigor
- Risk and reinforcement learning: can the brain tell us something about learning the variance of rewards?
- Open challenges and future directions: what more can reinforcement learning teach us about the brain, and where can we expect the brain to teach us about reinforcement learning?
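
The prediction-error items above lean on the standard temporal-difference error. As a point of reference (this is not material from the lecture itself), here is a minimal tabular TD(0) sketch in Python; the trial structure, state names and learning-rate value are illustrative assumptions, and delta is the quantity that the reward-prediction-error hypothesis maps onto phasic dopamine.

    # Minimal tabular TD(0) sketch: delta is the temporal-difference
    # prediction error that the dopamine hypothesis relates to phasic
    # dopamine responses. States, rewards and alpha are illustrative.
    alpha, gamma = 0.1, 0.95          # learning rate, discount factor (assumed values)
    V = {"cue": 0.0, "delay": 0.0, "reward_port": 0.0, "end": 0.0}

    def td_update(s, r, s_next):
        """One TD(0) backup; returns the prediction error delta."""
        delta = r + gamma * V[s_next] - V[s]   # better or worse than expected?
        V[s] += alpha * delta
        return delta

    # A single Pavlovian-style trial: cue -> delay -> reward -> end
    episode = [("cue", 0.0, "delay"), ("delay", 0.0, "reward_port"),
               ("reward_port", 1.0, "end")]
    for s, r, s_next in episode:
        print(s, "delta =", round(td_update(s, r, s_next), 3))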
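
The actor/critic item describes a division of labour in which a critic learns state values and broadcasts its prediction error to an actor that learns action preferences. A hedged sketch of that arrangement, with made-up states, actions and step sizes:

    import math, random

    # Actor/critic sketch: the critic's TD error trains both the state values
    # (critic) and the action preferences (actor). All names and constants are
    # illustrative, not taken from the tutorial slides.
    alpha_v, alpha_p, gamma = 0.1, 0.1, 0.95
    states, actions = ["s0", "s1"], ["go", "nogo"]
    V = {s: 0.0 for s in states}                            # critic
    pref = {(s, a): 0.0 for s in states for a in actions}   # actor

    def softmax_policy(s):
        """Sample an action with probability proportional to exp(preference)."""
        weights = [math.exp(pref[(s, a)]) for a in actions]
        r = random.random() * sum(weights)
        for a, w in zip(actions, weights):
            r -= w
            if r <= 0:
                return a
        return actions[-1]

    def actor_critic_step(s, a, r, s_next):
        delta = r + gamma * V[s_next] - V[s]   # critic's prediction error
        V[s] += alpha_v * delta                # critic update
        pref[(s, a)] += alpha_p * delta        # actor update (same error signal)

    s = "s0"
    a = softmax_policy(s)                      # actor picks an action
    actor_critic_step(s, a, r=1.0, s_next="s1")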
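
The SARSA-versus-Q-learning item turns on the difference between on-policy and off-policy temporal-difference targets. The following sketch simply contrasts the two update rules; the toy state and action sets and the epsilon-greedy choice rule are assumptions for illustration:

    import random

    # On-policy (SARSA) vs off-policy (Q-learning) targets; the lecture asks
    # whether dopaminergic prediction errors look more like one or the other.
    # Q, alpha, gamma and epsilon are illustrative, not lecture code.
    alpha, gamma, epsilon = 0.1, 0.9, 0.1
    actions = ["left", "right"]
    Q = {(s, a): 0.0 for s in ["s0", "s1"] for a in actions}

    def eps_greedy(s):
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    def sarsa_update(s, a, r, s_next, a_next):
        # Target uses the action actually taken next (on-policy).
        delta = r + gamma * Q[(s_next, a_next)] - Q[(s, a)]
        Q[(s, a)] += alpha * delta

    def q_learning_update(s, a, r, s_next):
        # Target uses the best available next action (off-policy).
        delta = r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)]
        Q[(s, a)] += alpha * delta

    s, a = "s0", eps_greedy("s0")
    s_next = "s1"
    a_next = eps_greedy(s_next)
    sarsa_update(s, a, 1.0, s_next, a_next)    # on-policy backup
    q_learning_update(s, a, 1.0, s_next)       # off-policy backup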
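
Finally, the average-reward formulation mentioned under "Beyond phasic dopamine" replaces discounting with a running estimate of the reward rate, which the tutorial connects to tonic dopamine and response vigor. A minimal sketch of that variant of the TD error, with assumed step sizes and states:

    # Average-reward TD sketch: rho tracks the reward rate, and the error
    # compares each outcome against that ongoing average rather than a
    # discounted future. Step sizes and states are illustrative assumptions.
    alpha, beta = 0.1, 0.01
    rho = 0.0
    V = {"s0": 0.0, "s1": 0.0}

    def avg_reward_td_update(s, r, s_next):
        global rho
        delta = r - rho + V[s_next] - V[s]   # undiscounted, rate-corrected error
        V[s] += alpha * delta
        rho += beta * delta                  # slowly updated average reward
        return delta

    print("delta =", avg_reward_td_update("s0", 1.0, "s1"))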

See Also:

Download slides: icml09_niv_tnorl_01.pdf (9.1 MB)

