Bayesian Time Series Modeling: Structured Representations for Scalability

author: Emily Fox, Department of Statistics, University of Washington
Published: Dec. 5, 2015 · Recorded: October 2015 · Views: 4075

Watch videos:

Part 1 (1:05:23)
Part 2 (49:59)

Description

Time series of increasing complexity are being collected in fields ranging from neuroscience, genomics, and environmental monitoring to e-commerce, enabled by technologies and infrastructures previously unavailable. These datasets can be viewed either as a single, high-dimensional time series or as a massive collection of time series with intricate and possibly evolving relationships between them. For scalability, it is crucial to discover and exploit sparse dependencies between the data streams or dimensions. Such representational structures have been extensively explored in the machine learning community for independent data sources. However, in the conversation on big data, and despite the importance and prevalence of time series, the question of how to analyze such data at scale has received limited attention and represents an area of open research opportunity.

For these time series, there are two key modeling components: the dynamical model, the relational model, and their interplay. This tutorial reviews foundational time series models, including the hidden Markov model (HMM) and the vector autoregressive (VAR) process. Such dynamical models and their extensions have proven useful in capturing the complex dynamics of individual data streams such as human motion, speech, EEG recordings, and genome sequences. A focus of the tutorial, however, is on how to deploy scalable representational structures for capturing sparse dependencies between data streams. In particular, we consider clustering, directed and undirected graphical models, and low-dimensional embeddings in the context of time series, with an emphasis on learning such structure from the data. We also provide insights into new computational methods for performing efficient inference in large-scale time series.

Throughout the tutorial we highlight Bayesian and Bayesian nonparametric approaches to learning and inference. Bayesian methods provide an attractive framework for examining complex data streams: they naturally incorporate and propagate notions of uncertainty and enable integration of heterogeneous data sources. The Bayesian nonparametric aspect allows the complexity of the dynamics and relational structure to adapt to the observed data.
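To make the VAR setting concrete, here is a minimal sketch (not from the tutorial itself) of a first-order VAR process x_t = A x_{t-1} + eps_t with a sparse transition matrix, recovered by ordinary least squares. The matrix A, the dimensions, and the noise scale are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: simulate a stable VAR(1) process
#   x_t = A x_{t-1} + eps_t
# with a sparse transition matrix A (each series depends on only a
# few others), then estimate A from the data by least squares.
# A, d, T, and the noise level are hypothetical choices.
rng = np.random.default_rng(0)
d, T = 3, 5000

A = np.array([[0.5, 0.0, 0.2],
              [0.0, 0.4, 0.0],
              [0.0, 0.3, 0.5]])  # eigenvalues inside the unit circle

x = np.zeros((T, d))
for t in range(1, T):
    x[t] = A @ x[t - 1] + 0.1 * rng.standard_normal(d)

# Regress each x_t on x_{t-1}: lstsq solves X_past @ B = X_next
# with B = A.T, so transpose the solution to recover A.
X_past, X_next = x[:-1], x[1:]
B, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
A_hat = B.T

print(np.round(A_hat, 2))
```

With enough data the estimate closely matches the sparse A; in practice, sparsity in the transition matrix is what the structured representations discussed in the tutorial aim to discover and exploit, rather than being assumed known.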
