Finite horizon exploration for path integral control problems
published: Feb. 25, 2007, recorded: December 2006
Description
We have recently developed a path integral method for solving a class of non-linear stochastic control problems in the continuous domain [1, 2]. Path integral (PI) control can be applied to time-dependent finite-horizon tasks (motor control, coordination between agents) and to static tasks (which behave similarly to discounted-reward reinforcement learning). In this control formalism, the cost-to-go, or value function, can be written explicitly as a function of the environment and the rewards, in the form of a path integral. Thus, PI control does not require solving the Bellman equation. Computing the path integral can itself be complex, but methods and concepts from statistical physics, such as Monte Carlo sampling or the Laplace approximation, yield efficient approximations. The formalism also generalizes to multiple agents that jointly solve a task; the agents must then coordinate their actions not only through time but also among each other. It was recently shown that this problem can be mapped onto a graphical model inference problem and solved using the junction tree algorithm. Exact control solutions can be computed for, e.g., hundreds of agents, depending on the complexity of the cost function [3].
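To make the key idea concrete, here is a minimal Monte Carlo sketch of the path integral representation of the cost-to-go, J(x, t) = -λ log E[exp(-S(path)/λ)], where the expectation runs over uncontrolled (noise-driven) trajectories. The toy one-dimensional setup, the costs, the relation λ = νR, and all names such as `value_function` are illustrative assumptions, not the lecture's actual example.

```python
import numpy as np

# Sketch of Monte Carlo path integral control, assuming the standard
# formulation J(x, t) = -lambda * log E[exp(-S(path) / lambda)], with the
# expectation over *uncontrolled* diffusion dx = sqrt(nu) dW.
# Toy 1-D problem (hypothetical, for illustration only).

nu = 1.0            # noise variance
lam = nu            # lambda = nu * R, with control cost R = 1 (assumed)
T, dt = 1.0, 0.01   # horizon and time step
n_steps = int(T / dt)
n_samples = 10_000

def phi(x):
    """End cost: quadratic penalty for ending away from the target x = 1."""
    return (x - 1.0) ** 2

def V(x, t):
    """State cost along the path (zero here; plug in obstacles, etc.)."""
    return np.zeros_like(x)

def value_function(x0, rng):
    """Monte Carlo estimate of the cost-to-go J(x0, 0)."""
    x = np.full(n_samples, x0, dtype=float)
    S = np.zeros(n_samples)              # accumulated path cost per sample
    for k in range(n_steps):
        S += V(x, k * dt) * dt
        # Sample the uncontrolled dynamics: dx = sqrt(nu * dt) * noise.
        x += np.sqrt(nu * dt) * rng.standard_normal(n_samples)
    S += phi(x)
    # J = -lambda * log <exp(-S / lambda)>; log-sum-exp trick for stability.
    m = S.min()
    return m - lam * np.log(np.mean(np.exp(-(S - m) / lam)))

rng = np.random.default_rng(0)
print(f"J(0, 0) ≈ {value_function(0.0, rng):.3f}")
```

For hard problems, naive sampling of this kind degenerates because a few low-cost paths dominate the exponential average, which is one motivation for the Laplace approximation and the other statistical physics tools mentioned above.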