
Time Series Prediction by Chaotic Modeling of Nonlinear Dynamical Systems


Downloads

·         Arslan Basharat and Mubarak Shah, Time Series Prediction by Chaotic Modeling of Nonlinear Dynamical Systems, International Conference on Computer Vision (ICCV), Oct 2009, Kyoto, Japan [PDF]

 

·         Videos [Results]


Abstract

We use concepts from chaos theory to model nonlinear dynamical systems that exhibit deterministic behavior. An observed time series from such a system can be embedded into a higher-dimensional phase space without knowledge of an exact model of the underlying dynamics. This embedding warps the observed data onto a strange attractor in the phase space, which provides precise information about the dynamics involved. We extract this information from the strange attractor and use it to predict future observations. Given an initial condition, predictions in the phase space are computed through kernel regression. This approach has the advantage of modeling the dynamics without assuming an exact form (linear, polynomial, radial basis, etc.) for the mapping function. The predicted points are then warped back into the observed time series. We demonstrate the utility of these predictions for human action synthesis and dynamic texture synthesis. Our main contributions are: multivariate phase space reconstruction for human actions and dynamic textures, a deterministic approach to modeling dynamics in contrast to the popular noise-driven approaches for dynamic textures, and video synthesis through kernel regression in the phase space. Experimental results provide qualitative and quantitative analysis of our approach on standard data sets.
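As a minimal sketch of the embedding step described above (not the paper's code; the function name, dimension, and delay value are illustrative), a univariate time series can be lifted into a phase space with delay coordinates zt = (xt, xt+τ, . . . , xt+(n−1)τ):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay embedding of a univariate series x: each phase space vector
    stacks dim samples spaced tau steps apart."""
    n = len(x) - (dim - 1) * tau  # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A sine wave embedded in 2-D delay coordinates traces a closed loop,
# the attractor of a simple periodic system.
x = np.sin(np.linspace(0, 8 * np.pi, 400))
Z = delay_embed(x, dim=2, tau=10)
print(Z.shape)  # (390, 2)
```

For a multivariate series (the case introduced in this paper), the same construction is applied per channel and the resulting vectors concatenated.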

 


Proposed Approach

The aim of this paper is to investigate the relevant concepts from chaos theory and propose a novel and robust model for video synthesis. The novelty of this work lies in:

•       The formulation of phase space reconstruction from the multivariate time series data of human actions and dynamic textures. Previously [1], only univariate phase space models of human actions have been studied for action recognition.

•       A new deterministic dynamical model for dynamic textures in contrast to previously popular stochastic noise-driven dynamical systems [9, 24].

•       A new nonparametric model based on kernel regression in phase space.

 

 

We investigate dynamical systems that define the time evolution of the underlying dynamics in a phase (or state) space. The first task is to reconstruct the phase space from the time series. The time series observations {x0, x1, . . . , xt, . . .} are transformed into the phase space vectors {z0, z1, . . . , zt, . . .} through delay embedding, which is explained in Sec. 2.1. In a deterministic nonlinear dynamical (chaotic) system, specifying a point in the phase space identifies the state of the system and vice versa. This implies that we can model the dynamics of the system by modeling the dynamics of the corresponding points in the phase space. This idea forms the foundation for modeling an underlying chaotic system of unknown form and predicting its future states.

A system state is defined by a vector zt ∈ Rn. The dynamics of these states are defined either by an n-dimensional mapping function zt+1 = F(zt), or by n first-order differential equations. The latter approach is typically used for studying theoretical systems, because the exact equations are rarely known for experimental systems; the former approach, based on the mapping function, is more popular for experimental systems. We adopt a kernel regression based mapping function to predict future system states. This mapping function follows the training data closely without significant assumptions about its functional form. We do assume a form for the underlying kernel, but its choice is not as critical as the choice of a functional form for the mapping function, e.g. polynomial, radial basis function, etc. The states predicted by the mapping function are then transformed back into the output time series. We use this model for the synthesis of human actions and dynamic textures in videos.
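The kernel regression mapping function can be sketched as a Nadaraya-Watson estimator (a generic formulation, not the paper's exact implementation; the Gaussian kernel and bandwidth parameter here are assumptions): the predicted next state is a weighted average of the observed next states, with weights given by a kernel on the distance from the query state.

```python
import numpy as np

def predict_next(z_query, Z, Z_next, bandwidth):
    """Nadaraya-Watson estimate of zt+1 = F(zt): average the training
    next-states Z_next, weighted by a Gaussian kernel on the distance
    between the query state and each training state in Z."""
    d2 = np.sum((Z - z_query) ** 2, axis=1)      # squared distances to query
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))     # Gaussian kernel weights
    return (w[:, None] * Z_next).sum(axis=0) / w.sum()

# Toy usage: states on a line, each observed to map to state + 1.
Z = np.array([[0.0], [1.0], [2.0]])
Z_next = Z + 1.0
print(predict_next(np.array([1.0]), Z, Z_next, bandwidth=0.1))  # ~[2.0]
```

Iterating this one-step prediction from an initial condition yields a trajectory in the phase space, which is then mapped back to the observation domain for synthesis.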

Results

Some of the result videos from the paper are provided here.

Action Synthesis

·         Figure 5: Univariate vs. multivariate embedding and prediction

 


run1_original.avi


run1_synthesis_univariate.avi


run1_synthesis_multivariate.avi

 

 

·         Figure 7(c): Comparison with Gaussian Process Dynamical Models (GPDM)*

comparison_with_GPDM.wmv

 

* J. Wang, D. Fleet and A. Hertzmann. Gaussian process dynamical models for human motion. PAMI, 2008.

Dynamic Texture Synthesis

·         Figure 9: Univariate vs. multivariate embedding and prediction

 

flag_univariate.avi

flag_multivariate.avi

 

·         Figure 10: Synthesis on UCLA database

 


ucladb_synthesis.avi

