Stochastic Processes: Markov Chains [closed]

I would like to know how to solve question 3

[Image: problem statement]


Solution 1:

  1. The following figure shows the Markov chain.

[Figure: state-transition diagram of the Markov chain]

with the following transition matrix (with $p=0.5$):

[Image: transition probability matrix]

  2. Simulation of 2 sample paths using the transition probability matrix with Python (note that Python uses $0$-based indexing):

    import networkx as nx
    import numpy as np
    import matplotlib.pyplot as plt

    p = 0.5
    g = nx.DiGraph()

    for i in range(1, 22):
        g.add_node(i)

    # add all the edges to the graph with the transition probabilities
    g.add_edge(1, 2, weight=1)
    g.add_edge(2, 1, weight=1)
    # ... ...

    # obtain the Markov transition matrix (rows / columns follow node insertion order)
    T = nx.adjacency_matrix(g).todense()

    n_steps = 200
    n_paths = 2
    plt.figure(figsize=(15, 5))
    plt.ylim(15, 25)
    for _ in range(n_paths):
        s = 21
        chain = [s]
        for _ in range(n_steps):
            nbrs = list(g.neighbors(s))
            # sample the next state, with probability given by the transition weights
            ps = [T[s - 1, nbr - 1] for nbr in nbrs]
            s = np.random.choice(nbrs, 1, p=ps)[0]
            chain.append(s)
        plt.plot(range(n_steps + 1), chain)
    plt.show()   # show both sample paths on the same figure

If the initial state is in the vertex subset $\{19,20,21\}$ (e.g., a sample path starting at 21), the chain can never leave the subgraph formed by these 3 vertices, as can be seen from the next figure: the component is absorbing, with no outgoing edges.

[Figure: the absorbing subgraph on states 19, 20, 21]
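
As a quick programmatic check (a minimal sketch, assuming the graph `g` from the code above has been populated with the complete edge list), one can verify that no edge leaves this set:

    # Confirm that the component {19, 20, 21} has no outgoing edges,
    # i.e. every successor of a state in the set stays inside the set.
    absorbing = {19, 20, 21}
    escaping = [(u, v) for u in absorbing for v in g.successors(u) if v not in absorbing]
    print(escaping)   # expected: [] -- the component is absorbing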

  3. For any value of $p$, the states in $V\setminus\{19,20,21\}$ are transient: once the chain leaves that subset it can never return to it, since there is no outgoing edge from the component $\{19,20,21\}$. Hence, starting inside $\{19,20,21\}$, we have $P(X(5)=j)=0$ for every state $j \notin \{19,20,21\}$. In the long run the chain is equally likely to be in any of these 3 states; this intuition is supported by the following power iteration:

    # start from a distribution supported on the absorbing component (0-based indices)
    initial = np.zeros(21)
    initial[19] = initial[20] = 1/2
    # power iteration: repeatedly multiply the distribution by the transition matrix
    vec = initial
    for i in range(100):
        vec = vec @ T
    vec
    # [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
    #   0.33333333, 0.33333333, 0.33333333]]
    

    We can also prove this theoretically. To show that the steady-state probabilities are exactly the same for each vertex in the isolated subgraph $\{19,20,21\}$, solve the linear system $\pi = \pi P$, with (rows and columns indexed by the states $19,20,21$)

    $$P=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}&0\\ \frac{1}{2}&0&\frac{1}{2}\\ 0&\frac{1}{2}&\frac{1}{2}\end{pmatrix},$$

    that reduces to the system of linear equations $\pi_{19}=\frac{1}{2}\pi_{20}+\frac{1}{2}\pi_{19}$, $\pi_{21}=\frac{1}{2}\pi_{20}+\frac{1}{2}\pi_{21}$ and $\pi_{20}=\frac{1}{2}\pi_{19}+\frac{1}{2}\pi_{21}$. These equations force $\pi_{19}=\pi_{20}=\pi_{21}$, which together with the normalization $\pi_{19}+\pi_{20}+\pi_{21}=1$ gives the desired result $\pi_{19}=\pi_{20}=\pi_{21}=\frac{1}{3}$ (a direct numerical solve of this system is sketched at the end of this solution).

    Since the subchain on $\{19,20,21\}$ is irreducible and aperiodic, by the Perron–Frobenius theorem it has a unique stationary distribution, independent of the initial state; we can obtain this steady-state vector by computing the eigenvector of $P^\top$ (i.e., the left eigenvector of $P$) corresponding to the eigenvalue $\lambda=1$, normalized so that its entries sum to $1$:

    # restricted transition matrix, transposed so that we get the left eigenvector of P
    Λ, V = np.linalg.eig(np.array([[1/2,1/2,0],[1/2,0,1/2],[0,1/2,1/2]]).T)
    Λ
    # array([-0.5,  0.5,  1. ])
    V = V[:, np.isclose(Λ, 1)]   # eigenvector for the eigenvalue λ = 1
    V / sum(V)                   # normalize the entries to sum to 1
    # array([[0.33333333],
    #        [0.33333333],
    #        [0.33333333]])
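
    As a cross-check of the linear-system argument above, here is a minimal sketch that solves $\pi = \pi P$ numerically, replacing one (redundant) balance equation by the normalization constraint:

    import numpy as np

    P = np.array([[1/2, 1/2, 0],
                  [1/2, 0, 1/2],
                  [0, 1/2, 1/2]])
    # stationarity: (P^T - I) pi = 0; drop one redundant row and add sum(pi) = 1
    A_sys = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
    b = np.array([0, 0, 1])
    print(np.linalg.solve(A_sys, b))
    # expected: [0.33333333 0.33333333 0.33333333]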
    

Solution 2:

As said by Sandipan: If you are in states $19, 20$ or $21$, there is no way to go to any other states. For these 3 states, you have the transition matrix $$A=\begin{pmatrix}\frac12&\frac12&0\\ \frac12&0&\frac 12 \\ 0 & \frac12 & \frac12\end{pmatrix}.$$ (Formally, you can go from the original chain to this $3$-state chain and back using properties of block matrices.) The matrix $A$ is symmetric, so it is unitarily diagonalizable (cf. page 307 of [1]).
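
For illustration, here is a minimal sketch of that reduction, assuming the full $21\times 21$ matrix `T` from Solution 1 is available (node order is insertion order, so states $19,20,21$ correspond to the $0$-based indices $18,19,20$):

    import numpy as np

    # The rows and columns of T for states 19, 20, 21 form a self-contained block,
    # because no probability mass leaves the set {19, 20, 21}.
    A = np.asarray(T)[18:21, 18:21]
    print(A)               # expected: the 3x3 matrix A shown above
    print(A.sum(axis=1))   # expected: [1. 1. 1.] -- the block is itself row-stochastic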

Using the standard diagonalization procedure (cf. page 236f. of [1]), we get $$A = S D S^{-1},$$ where $$S=\left(\begin{array}{ccc} 1 & -1 & 1 \\ -2 & 0 & 1 \\ 1 & 1 & 1 \\\end{array}\right)$$ and $$D=\left(\begin{array}{ccc} -\frac{1}{2} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & 1 \\\end{array}\right)$$ and $$S^{-1}=\left(\begin{array}{ccc} \frac{1}{6} & -\frac{1}{3} & \frac{1}{6} \\ -\frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\\end{array}\right).$$

Indeed, with respect to any norm on the space of $3\times 3$ matrices, $D^n\to\operatorname{diag}(0,0,1)$ as $n\to\infty$, so, as also mentioned by Sandipan, $$A^n\to S\operatorname{diag}(0,0,1) S^{-1}=\begin{pmatrix}\frac 13&\frac 13&\frac 13\\\frac 13&\frac 13&\frac 13\\\frac 13&\frac 13&\frac 13\end{pmatrix}.$$
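
As a quick numerical sanity check (a sketch restating the matrices $A$, $S$, $D$ from above), one can verify both the factorization and the limit:

    import numpy as np

    # Verify A = S D S^{-1} with the matrices stated above.
    A = np.array([[1/2, 1/2, 0],
                  [1/2, 0, 1/2],
                  [0, 1/2, 1/2]])
    S = np.array([[1.0, -1.0, 1.0],
                  [-2.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])
    D = np.diag([-1/2, 1/2, 1])
    print(np.allclose(S @ D @ np.linalg.inv(S), A))   # expected: True

    # The second-largest eigenvalue has modulus 1/2, so A^n converges quickly.
    print(np.linalg.matrix_power(A, 50))
    # expected: every entry approximately 0.33333333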

Multiplying this with any $$\begin{pmatrix}a\\b\\c\end{pmatrix}\in\mathbb R^3$$ gives $$\frac{a+b+c}3\begin{pmatrix}1\\1\\1\end{pmatrix}.$$ So indeed, if we start with any probability measure on the states $19,20,21$ (i.e. $a+b+c=1$ and $a,b,c\in[0,1]$), the chain asymptotically ends up in this uniform equilibrium.


Furthermore, this gives an easy answer to question 3: you just need to compute $$S D^5 S^{-1} \begin{pmatrix}\frac12\\0\\\frac12\end{pmatrix}.$$
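
A minimal numerical sketch of that product, assuming the initial distribution $\left(\frac12,0,\frac12\right)^\top$ stated above (note that $S D^5 S^{-1} = A^5$ by the diagonalization):

    import numpy as np

    # Compute S D^5 S^{-1} v0, which equals A^5 v0.
    A = np.array([[1/2, 1/2, 0],
                  [1/2, 0, 1/2],
                  [0, 1/2, 1/2]])
    v0 = np.array([1/2, 0, 1/2])
    print(np.linalg.matrix_power(A, 5) @ v0)
    # expected output: [0.328125 0.34375  0.328125], i.e. (21/64, 11/32, 21/64)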

Literature

[1]: Gerd Fischer, Lineare Algebra, 17th edition (2009).