What does the steady state represent to a Markov Chain?

I'm a little confused as to the interpretation of the steady state in the context of a Markov chain. I know Markov chains are memoryless, in that each state only depends on its immediate predecessor, but doesn't that mean the system is in a sort of steady state?


I think you are struggling because of a misunderstanding of what the term steady state is meant to describe. You are correct that a Markov chain is completely determined once it is defined: given the initial distribution and the transition probabilities, no matter what step (or time) you are interested in, I can find the distribution of the chain at that step. In that sense the law of the chain is uniquely determined.

However, that is not what we mean by steady state, and we need to be careful when we say steady state because the term can carry two different meanings.

Meaning 1: There is a very deep relationship between stochastic processes and linear algebra. If you have not taken a linear algebra course that discussed both eigenvalues and eigenvectors, then this might be hard to understand.

A steady state is an eigenvector of a stochastic matrix with eigenvalue $1$. That is, if I take a probability vector, multiply it by the one-step transition matrix, and get back exactly the same probability vector, then that vector is a steady state. In other words, nothing changed after the step: you got out the same probabilities that you put in. In symbols, a steady state $\pi$ satisfies $\pi P = \pi$, so $\pi$ is a (left) eigenvector of the transition matrix $P$ with eigenvalue $\lambda = 1$. A stochastic matrix always has $\lambda = 1$ as an eigenvalue, and for a nice (irreducible) chain the corresponding probability vector is unique.
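As a concrete illustration (the matrix is an assumption of mine, not one from the question), here is a minimal NumPy sketch that finds the $\lambda = 1$ left eigenvector of a two-state transition matrix and normalizes it into a probability vector:

```python
import numpy as np

# Hypothetical row-stochastic transition matrix (rows sum to 1); any such matrix works.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)

# Pick the eigenvector whose eigenvalue is (numerically) 1.
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()          # normalize so the entries sum to 1

print(pi)                   # steady-state vector, here [0.8333..., 0.1666...]
print(pi @ P)               # multiplying by P returns the same vector: pi P = pi
```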

Meaning 2: However, I think you might be talking about limiting distributions, which are sometimes called steady-state distributions for Markov chains. The idea of a limiting (steady-state) distribution is that the process has reached (or is converging to) a point where the distribution over states no longer changes: the distribution at this step is equal to the distribution at every step from here on. So once the chain has reached its limiting distribution, the probability of finding it in any accessible state $j$ is the same now and at any time in the future. Limiting distributions are unique when they exist.
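To see this limiting behaviour numerically, here is a hedged sketch (again with a made-up transition matrix) that repeatedly applies the transition matrix to an arbitrary starting distribution; for a chain with a limiting distribution, the successive distributions stop changing:

```python
import numpy as np

P = np.array([[0.9, 0.1],       # assumed example matrix, not from the answer
              [0.5, 0.5]])

dist = np.array([1.0, 0.0])     # start deterministically in the first state
for step in range(1, 21):
    dist = dist @ P             # distribution of the chain at the next step
    if step % 5 == 0:
        print(step, dist)
# The printed vectors approach [0.8333..., 0.1666...]; once there, further
# steps leave the distribution unchanged, which is the steady state of Meaning 2.
```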

Things get interesting when you plug the limiting distribution back into Meaning 1: when it exists, it is exactly a steady-state vector, since applying the transition matrix to it returns the same distribution.


One question we can ask is for the probabilities that the system will be found in each of the states at a given future time. In general, these probabilities depend on which future time you are asking about. For many, but not all, Markov chains, however, the probability of each state approaches a limiting value as time goes to infinity. In other words, in the far future, the probabilities won't be changing much from one transition to the next. These limiting values are called "stable probabilities".

If you start the system off so that each state has probability equal to its stable probability, then these probabilities will persist for all time. The system will therefore be in a "steady state".
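A quick sanity check of this claim, using an illustrative matrix of my own rather than one from the question: start the chain with each state weighted by its stable probability, and the distribution after any number of transitions is identical.

```python
import numpy as np

P = np.array([[0.9, 0.1],             # illustrative transition matrix (assumed)
              [0.5, 0.5]])
stable = np.array([5/6, 1/6])         # stable probabilities for this P

print(stable @ P)                              # [0.8333..., 0.1666...] -- unchanged
print(stable @ np.linalg.matrix_power(P, 50))  # still unchanged 50 steps later
```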

Some Markov chains do not have stable probabilities. For example, if the transition probabilities are given by the matrix $$ \begin{bmatrix}0 & 1\\1 & 0\end{bmatrix}, $$ and if the system is started off in State 1, then the probability of finding the system in State 1 will oscillate between 0 and 1 forever.
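You can see the oscillation directly; a small sketch using the matrix above:

```python
import numpy as np

P = np.array([[0, 1],           # the permutation matrix from the answer
              [1, 0]])

dist = np.array([1.0, 0.0])     # start in State 1
for step in range(6):
    print(step, dist)           # probability of State 1 alternates 1, 0, 1, 0, ...
    dist = dist @ P
# No limiting distribution exists from this start, although the uniform vector
# [0.5, 0.5] is still a steady state in the eigenvector sense: [0.5, 0.5] @ P
# returns [0.5, 0.5].
```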