Approximate inference for continuous time Markov processes
published: Sept. 17, 2008, recorded: May 2008, views: 5743
Description
Continuous time Markov processes (such as jump processes and diffusions) play an important role in the modelling of dynamical systems in many scientific areas. In a variety of applications, the stochastic state of the system as a function of time is not directly observed. One only has access to a set of noisy observations taken at a discrete set of times. The problem is then to infer the unknown state path as well as possible. In addition, model parameters (such as diffusion constants or transition rates) may also be unknown and have to be estimated from the data. While it is fairly straightforward to present a theoretical solution to these estimation problems, a practical solution in terms of PDEs or by Monte Carlo sampling can be very time consuming, and one therefore looks for efficient approximations. I will discuss approximate solutions to this problem, such as variational approximations to the probability measure over paths and weak noise expansions.
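To make the problem setup concrete, here is a minimal sketch (not taken from the lecture, and with illustrative parameter values) of the inference task described above: an Ornstein-Uhlenbeck diffusion is simulated with Euler-Maruyama, observed with Gaussian noise at a sparse set of times, and the posterior over the state path is recovered with a Kalman filter and smoother. The smoother is exact here only because this toy model is linear-Gaussian; the variational and weak-noise methods discussed in the lecture target the general, nonlinear case where no such closed-form solution exists.

```python
# Sketch of path inference for a diffusion observed noisily at discrete times.
# Model: dx = -theta * x dt + sigma dW, observations y_k = x(t_k) + noise.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed values, not from the lecture)
theta, sigma = 1.0, 0.5          # drift rate and diffusion constant
obs_noise = 0.2                  # std of observation noise
dt, T = 0.01, 10.0               # time step and horizon
n_steps = int(T / dt)

# Simulate the latent path with Euler-Maruyama
x = np.zeros(n_steps + 1)
for k in range(n_steps):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.normal()

# Noisy observations at a sparse, discrete set of times
obs_every = 100                                  # one observation per time unit
obs_idx = np.arange(obs_every, n_steps + 1, obs_every)
y = x[obs_idx] + obs_noise * rng.normal(size=obs_idx.size)

# Kalman filter using the exact OU transition density between grid points
a = np.exp(-theta * dt)                          # transition coefficient
q = sigma**2 / (2 * theta) * (1 - a**2)          # transition variance
m_f = np.zeros(n_steps + 1)
P_f = np.zeros(n_steps + 1)
m_f[0], P_f[0] = 0.0, sigma**2 / (2 * theta)     # stationary prior
obs_set = set(obs_idx.tolist())
for k in range(1, n_steps + 1):
    m_pred, P_pred = a * m_f[k - 1], a**2 * P_f[k - 1] + q
    if k in obs_set:                             # update only at observation times
        gain = P_pred / (P_pred + obs_noise**2)
        yk = y[np.where(obs_idx == k)[0][0]]
        m_f[k] = m_pred + gain * (yk - m_pred)
        P_f[k] = (1 - gain) * P_pred
    else:
        m_f[k], P_f[k] = m_pred, P_pred

# Rauch-Tung-Striebel backward pass: smoothed posterior mean of the state path
m_s = m_f.copy()
for k in range(n_steps - 1, -1, -1):
    P_pred = a**2 * P_f[k] + q
    G = P_f[k] * a / P_pred
    m_s[k] = m_f[k] + G * (m_s[k + 1] - a * m_f[k])

print("posterior path estimate at observation times:", np.round(m_s[obs_idx], 3))
```

In the general setting of the lecture (nonlinear drift, jump processes, unknown diffusion constants or transition rates), this exact filtering step is replaced by an approximation, for instance a variational fit to the posterior measure over paths.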