Abstraction Selection in Model-based Reinforcement Learning

author: Nan Jiang, Department of Computer Science and Engineering, Michigan State University
published: Dec. 5, 2015, recorded: October 2015, views: 1936
Download slides: icml2015_jiang_reinforcement_learning_01.pdf (340.7 KB)



Description

State abstractions are often used to reduce the complexity of model-based reinforcement learning when only limited quantities of data are available. However, choosing the appropriate level of abstraction is an important problem in practice. Existing approaches have theoretical guarantees only under strong assumptions on the domain or with asymptotically large amounts of data. In this paper we propose a simple algorithm, based on statistical hypothesis testing, that comes with a finite-sample guarantee under assumptions on the candidate abstractions. Our algorithm trades off the low approximation error of finer abstractions against the low estimation error of coarser abstractions, resulting in a loss bound that depends only on the quality of the best available abstraction and is polynomial in the planning horizon.
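
The algorithm itself is not spelled out on this page, but the trade-off in the description can be illustrated with a rough sketch in Python: prefer a coarser abstraction (lower estimation error) unless the data provide statistical evidence that it differs from a finer one (approximation error). The function names, the reward-only confidence-band test, and the assumption that candidate abstractions are nested are all assumptions of this sketch, not the paper's actual test statistic or procedure.

    import numpy as np

    def empirical_model(data, phi, n_abs, n_actions):
        # Empirical transition/reward model of the abstract MDP induced by the
        # state mapping phi (dict: raw state -> abstract state index).
        # data is a list of (s, a, r, s_next) tuples collected from the raw MDP.
        counts = np.zeros((n_abs, n_actions, n_abs))
        r_sum = np.zeros((n_abs, n_actions))
        visits = np.zeros((n_abs, n_actions))
        for s, a, r, s_next in data:
            counts[phi[s], a, phi[s_next]] += 1
            r_sum[phi[s], a] += r
            visits[phi[s], a] += 1
        safe = np.maximum(visits, 1)  # avoid division by zero for unvisited pairs
        return counts / safe[:, :, None], r_sum / safe, visits

    def coarse_is_adequate(data, phi_fine, phi_coarse, n_fine, n_coarse, n_actions, z=2.0):
        # Crude hypothesis test: keep the coarser abstraction (lower estimation
        # error) unless the finer model's empirical rewards differ from the
        # coarser model's by more than the combined confidence band, i.e. unless
        # the data can actually detect the coarser abstraction's approximation error.
        _, R_f, n_f = empirical_model(data, phi_fine, n_fine, n_actions)
        _, R_c, n_c = empirical_model(data, phi_coarse, n_coarse, n_actions)
        fine_to_coarse = {phi_fine[s]: phi_coarse[s] for s in phi_fine}  # assumes nested candidates
        for x, y in fine_to_coarse.items():
            for a in range(n_actions):
                width = z * (1.0 / np.sqrt(max(n_f[x, a], 1)) + 1.0 / np.sqrt(max(n_c[y, a], 1)))
                if abs(R_f[x, a] - R_c[y, a]) > width:
                    return False  # detectable discrepancy: fall back to the finer abstraction
        return True

One natural way to use such a test, in the spirit of the description, is to order the candidate abstractions from coarse to fine and select the coarsest one that no finer candidate can statistically refute; a full version would also compare next-state distributions (or the values they induce), not only rewards.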


Reviews and comments:

Comment 1, bo liu, July 26, 2016 at 1:49 a.m.:

The person who recorded the video was too careless to upload the slides ...
