Approximate inference for continuous time Markov processes

author: Manfred Opper, Department of Artificial Intelligence, TU Berlin
published: Sept. 17, 2008,   recorded: May 2008

Description

Continuous time Markov processes (such as jump processes and diffusions) play an important role in the modelling of dynamical systems in many scientific areas. In a variety of applications, the stochastic state of the system as a function of time is not directly observed; one only has access to a set of noisy observations taken at a discrete set of times. The problem is then to infer the unknown state path as well as possible. In addition, model parameters (such as diffusion constants or transition rates) may also be unknown and have to be estimated from the data. While it is fairly straightforward to write down a theoretical solution to these estimation problems, practical solutions in terms of PDEs or Monte Carlo sampling can be very time-consuming, and one looks for efficient approximations. I will discuss approximate solutions to this problem such as variational approximations to the probability measure over paths and weak noise expansions.
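
As a rough illustration of the problem setup only (not of the variational or weak-noise methods discussed in the talk), the sketch below simulates an Ornstein-Uhlenbeck diffusion, observes it with noise at a few discrete times, and recovers the posterior state path with a Kalman filter/smoother, which happens to be exact in this linear-Gaussian special case. All parameter names and values are hypothetical choices for the example.

# Illustrative sketch (assumed setup, not from the lecture): infer the latent
# path of a linear diffusion from sparse noisy observations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model parameters
gamma, sigma = 1.0, 0.5          # OU drift rate and diffusion strength
obs_noise = 0.1                  # std. dev. of observation noise
dt, T = 0.01, 10.0               # simulation time step and horizon
n = int(T / dt)

# Simulate a latent OU path dx = -gamma*x dt + sigma dW with Euler-Maruyama
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t-1] - gamma * x[t-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Noisy observations at a sparse set of times
obs_idx = np.arange(100, n, 100)                 # observe every 100th step
y = x[obs_idx] + obs_noise * rng.standard_normal(obs_idx.size)
obs_set = {int(i): yk for i, yk in zip(obs_idx, y)}

# Kalman filter on the fine time grid
A = 1.0 - gamma * dt             # discrete-time transition coefficient
Q = sigma**2 * dt                # process noise variance per step
m = np.zeros(n); P = np.zeros(n)
m[0], P[0] = 0.0, 1.0
for t in range(1, n):
    # predict
    m[t] = A * m[t-1]
    P[t] = A**2 * P[t-1] + Q
    # update when an observation falls at this time
    if t in obs_set:
        K = P[t] / (P[t] + obs_noise**2)
        m[t] += K * (obs_set[t] - m[t])
        P[t] *= (1.0 - K)

# Rauch-Tung-Striebel smoother: condition the whole path on all observations
ms = m.copy(); Ps = P.copy()
for t in range(n - 2, -1, -1):
    P_pred = A**2 * P[t] + Q
    G = P[t] * A / P_pred
    ms[t] = m[t] + G * (ms[t+1] - A * m[t])
    Ps[t] = P[t] + G**2 * (Ps[t+1] - P_pred)

print("RMS error of smoothed path:", np.sqrt(np.mean((ms - x)**2)))

For nonlinear diffusions or jump processes this exact linear-Gaussian machinery no longer applies, which is where the approximations mentioned in the abstract (variational approximations over path measures, weak noise expansions) come in.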

Download slides: aispds08_opper_aict_01.pdf (424.7 KB)
