Linear Bellman Equations: Theory and Applications
published: Jan. 19, 2010, recorded: December 2009, views: 1250
I will provide a brief overview of a class of stochastic optimal control problems recently developed by our group, as well as by Bert Kappen's group. This problem class is quite general and yet has a number of unique properties, including linearity of the exponentially-transformed (Hamilton-Jacobi) Bellman equation, duality with Bayesian inference, convexity of the inverse optimal control problem, compositionality of optimal control laws, and a path-integral representation of the exponentially-transformed value function. I will then focus on function approximation methods that exploit the linearity of the Bellman equation, and illustrate how such methods scale to high-dimensional continuous dynamical systems. Computing the weights for a fixed set of basis functions can be done very efficiently by solving a large but sparse linear problem, which enables us to work with hundreds of millions of (localized) bases. Still, the volume of a high-dimensional state space is too large to be filled with localized bases, forcing us to consider adaptive methods for positioning and shaping those bases. Several such methods will be compared.
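To illustrate the linearity being exploited, here is a minimal sketch (not the speaker's code) of a linearly-solvable first-exit problem in the style of this framework. Under the exponential transform z = exp(-v), the Bellman equation for the desirability z becomes linear, z(x) = exp(-q(x)) Σ p(x'|x) z(x'), so the optimal cost-to-go on a toy 1-D chain can be recovered with a single sparse linear solve. The chain, its state cost q, and the random-walk passive dynamics are all assumptions chosen for the example:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Hypothetical toy problem: a 1-D chain of n states; state n-1 is an
# absorbing goal with zero exit cost, so its desirability z is 1.
n = 50
q = 0.1 * np.ones(n - 1)          # state cost at the n-1 interior states (assumed)

# Passive dynamics: unbiased random walk with a reflecting left boundary.
P = sparse.lil_matrix((n - 1, n))
P[0, 0] = 0.5
P[0, 1] = 0.5
for i in range(1, n - 1):
    P[i, i - 1] = 0.5
    P[i, i + 1] = 0.5
P = P.tocsr()

# Linear Bellman equation for the desirability z = exp(-v):
#   z_i = exp(-q_i) * sum_j P[i, j] * z_j,   z_terminal = 1
D = sparse.diags(np.exp(-q))
P_int = P[:, : n - 1]                     # transitions among interior states
p_term = P[:, n - 1].toarray().ravel()    # transitions into the terminal state

A = sparse.identity(n - 1) - D @ P_int    # (I - D P) z = D p_term * 1
b = D @ p_term
z = spsolve(A.tocsr(), b)

v = -np.log(z)                            # optimal cost-to-go
```

With basis functions, the same linearity means the unknown weights enter the residual linearly, which is why fitting them reduces to the large sparse linear problem mentioned above rather than to an iterative dynamic-programming sweep.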