Markovian dynamics

In the PACE project we have investigated the consequences of the reduction scheme outlined above [1,2,3]. In particular, we have focused on reduction through aggregation of variables in large-scale Markov models [4,5,6,7]. A Markov chain is a linear dynamical system on the probability space over its states. Reducing the number of degrees of freedom in a linear system is straightforward, usually achieved by diagonalizing the matrix describing the dynamics (or, more generally, by transforming it into its Jordan canonical form). For a Markov chain, as well as in some other situations such as complex networks, such reductions are not interesting, since the individual variables carry special meaning whereas linear combinations of them do not. In a Markov chain, for example, reduction by diagonalizing the transition matrix does not reduce the number of states in the state space (as opposed to the dimensionality of the probability space). In these cases, reduction instead means that variables or states are aggregated into macro-states, with each state or variable belonging to exactly one macro-state. This restriction makes aggregation less straightforward. Aggregation of variables or states in linear dynamics, primarily in Markov chains, is of direct interest in a wide range of applications, such as transfer matrix methods in statistical physics, population dynamics, network performance analysis, Markov decision processes, economics, coarse-graining of cellular automata, reduction and modularity in complex networks, and chemical reaction kinetics.
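
As a concrete toy illustration of aggregation into macro-states, the Python sketch below checks the classical strong-lumpability condition of Kemeny and Snell for a candidate partition of a small Markov chain and, if it holds, constructs the aggregated transition matrix. The matrix and partition are invented for illustration; this is not the PACE criterion or algorithm.

```python
# Minimal sketch (not the PACE algorithm): test whether a small Markov chain
# can be aggregated ("lumped") with respect to a given partition, using the
# classical strong-lumpability condition: the total transition probability
# from a state into each block must be the same for all states in a block.
import numpy as np

# Row-stochastic transition matrix on 4 micro-states (illustrative values).
P = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],
    [0.1, 0.1, 0.5, 0.3],
    [0.1, 0.1, 0.3, 0.5],
])

# Candidate aggregation: macro-state A = {0, 1}, macro-state B = {2, 3}.
partition = [[0, 1], [2, 3]]

def lumped(P, partition, tol=1e-12):
    """Return the aggregated transition matrix if the partition is strongly
    lumpable, otherwise None."""
    k = len(partition)
    Q = np.zeros((k, k))
    for j, block_j in enumerate(partition):
        # Probability of jumping into block_j from each micro-state.
        into_j = P[:, block_j].sum(axis=1)
        for i, block_i in enumerate(partition):
            rows = into_j[block_i]
            if np.ptp(rows) > tol:      # rows differ -> not lumpable
                return None
            Q[i, j] = rows[0]
    return Q

print(lumped(P, partition))  # 2x2 macro-level transition matrix, or None
```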


Aggregation of states can also be used for dimensional reduction in deterministic and stochastic (finite-dimensional) dynamical systems. To realize this, the dynamics must be re-expressed in terms of the corresponding linear time-evolution operators on the probability space, i.e. the Liouville and Fokker-Planck operators, respectively. In this formulation the time evolution is a Markov process (with an infinite state space). Splitting the phase space into boxes then shows that dimensional reduction amounts to aggregation of states in the Markov process. For deterministic systems this leads to a slight generalization of the “standard” reduction through invariants of the motion, or foliations of the phase space [3].
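
The following sketch illustrates this re-expression for a deterministic map: the phase space is split into boxes and the dynamics is approximated by a row-stochastic transition matrix between the boxes (an Ulam-type discretization of the transfer operator). The logistic map and the box count are illustrative choices, not taken from the cited work.

```python
# Re-express a deterministic map as a finite-state Markov chain on phase-space
# boxes by sampling each box and counting where its points are mapped.
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

n_boxes = 50
samples_per_box = 2000
edges = np.linspace(0.0, 1.0, n_boxes + 1)
P = np.zeros((n_boxes, n_boxes))

for i in range(n_boxes):
    # Sample points in box i, map them forward, record which box they land in.
    x = np.random.uniform(edges[i], edges[i + 1], samples_per_box)
    j = np.minimum((logistic(x) * n_boxes).astype(int), n_boxes - 1)
    counts = np.bincount(j, minlength=n_boxes)
    P[i] = counts / counts.sum()

# P is now a row-stochastic matrix: the deterministic dynamics viewed as a
# Markov chain on box-states, to which state aggregation can be applied.
```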


Previous methods for aggregating variables in linear systems or states in Markov chains are typically based on apparent symmetries in the transition matrix or on time-scale separation. If no such symmetries or separation exist, the problem of identifying aggregated Markov chains becomes considerably harder. A naive approach, in which different candidate aggregations are tested systematically, is clearly not practical even for a moderate number of variables or states, since the number of possible partitions grows faster than exponentially [5]. Our new result [6,7,8] provides a criterion that can be used directly to design an effective algorithm for identifying aggregations of linear dynamical systems in general, and of Markov chains in particular.
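
To see why systematic testing is hopeless, note that the candidate aggregations of an n-state chain are the set partitions of its states, counted by the Bell numbers. The short sketch below (an illustration only, not the criterion-based algorithm of [6,7,8]) computes a few of them via the standard Bell-number recurrence.

```python
# Number of candidate partitions of an n-element state set: the Bell numbers,
# via the recurrence B_{m+1} = sum_k C(m, k) * B_k.
from math import comb

def bell(n):
    B = [1]                                  # B_0
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

for n in (5, 10, 20, 50):
    print(n, bell(n))   # 52, 115975, ~5.2e13, ~1.9e47 candidate partitions
```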