Reading: J. P. Jarvis and D. R. Shier, "Graph-Theoretic Analysis of Finite Markov Chains."
from __future__ import division
import numpy as np
import quantecon as qe
Make sure that you have installed quantecon version 0.1.6 (or above):
qe.__version__
'0.1.6'
Consider the Markov chain given by the following stochastic matrix (where the actual values of non-zero probabilities are not important):
P = np.zeros((6, 6))
P[0, 0] = 1
P[1, 4] = 1
P[2, [2, 3, 4]] = 1/3
P[3, [0, 5]] = 1/2
P[4, [1, 4]] = 1/2
P[5, [0, 3]] = 1/2
print (P)
[[ 1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.   ]
 [ 0.     0.     0.333  0.333  0.333  0.   ]
 [ 0.5    0.     0.     0.     0.     0.5  ]
 [ 0.     0.5    0.     0.     0.5    0.   ]
 [ 0.5    0.     0.     0.5    0.     0.   ]]
Create a MarkovChain instance:
mc0 = qe.MarkovChain(P)
We call the states $0, \ldots, 5$, respectively, instead of $1, \ldots, 6$.
(a) Determine the communication classes.
mc0.communication_classes
[array([0], dtype=int32), array([1, 4], dtype=int32), array([3, 5], dtype=int32), array([2], dtype=int32)]
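As a cross-check, the communication classes can be recovered directly from the zero pattern of P: two states communicate exactly when each is reachable from the other in the transition graph. Here is a minimal sketch using only NumPy (the names `A`, `reach`, `comm` are illustrative, not part of quantecon):

```python
import numpy as np

P = np.zeros((6, 6))
P[0, 0] = 1
P[1, 4] = 1
P[2, [2, 3, 4]] = 1/3
P[3, [0, 5]] = 1/2
P[4, [1, 4]] = 1/2
P[5, [0, 3]] = 1/2

n = len(P)
# Boolean adjacency matrix of the transition graph, plus self-loops
A = (P > 0) | np.eye(n, dtype=bool)

# Transitive closure by repeated squaring:
# reach[i, j] is nonzero iff j is reachable from i
reach = A.astype(int)
for _ in range(n):
    reach = (reach @ reach > 0).astype(int)

# i and j communicate iff each reaches the other
comm = (reach * reach.T) > 0
classes = {tuple(int(j) for j in np.where(row)[0]) for row in comm}
print(sorted(classes))  # {0}, {1, 4}, {2}, {3, 5}, as above
```

These agree (up to ordering) with mc0.communication_classes.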
(b) Classify the states of this Markov chain.
mc0.recurrent_classes
[array([0], dtype=int32), array([1, 4], dtype=int32)]
Obtain a list of the recurrent states.
# Write your own code
recurrent_states = np.concatenate(mc0.recurrent_classes)
print(recurrent_states)
[0 1 4]
Obtain a list of the transient states.
# Write your own code
transient_states = np.setdiff1d(np.arange(mc0.n), np.concatenate(mc0.recurrent_classes))
print(transient_states)
[2 3 5]
(c) Does the chain have a unique stationary distribution?
mc0.num_recurrent_classes
2
Obtain the stationary distributions:
vecs = mc0.stationary_distributions
print (vecs)
[[ 1.          0.          0.          0.          0.          0.        ]
 [ 0.          0.33333333  0.          0.          0.66666667  0.        ]]
Verify that the above vectors are indeed stationary distributions.
Hint: Use np.dot.
# Write your own code
for vec in vecs:
    print(np.dot(vec, P))
[ 1.  0.  0.  0.  0.  0.]
[ 0.          0.33333333  0.          0.          0.66666667  0.        ]
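The stationary distribution attached to a recurrent class can also be obtained by hand: restrict $P$ to the class and solve $\pi Q = \pi$ together with $\sum_i \pi_i = 1$. A sketch for the class $\{1, 4\}$ using plain NumPy (the names `states` and `pi_class` are illustrative):

```python
import numpy as np

P = np.zeros((6, 6))
P[0, 0] = 1
P[1, 4] = 1
P[2, [2, 3, 4]] = 1/3
P[3, [0, 5]] = 1/2
P[4, [1, 4]] = 1/2
P[5, [0, 3]] = 1/2

states = [1, 4]                    # a recurrent class
Q = P[np.ix_(states, states)]      # P restricted to the class
k = len(states)

# Stack the stationarity equations (Q - I)' pi = 0 on top of the
# normalization sum(pi) = 1; the system is overdetermined but consistent
A = np.vstack([(Q - np.eye(k)).T, np.ones(k)])
b = np.zeros(k + 1)
b[-1] = 1
pi_class, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi_class)  # [1/3, 2/3] on states 1 and 4
```

Embedding [1/3, 2/3] at positions 1 and 4 reproduces the second row of mc0.stationary_distributions.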
Let us simulate our Markov chain mc0. The simulate method generates a sample path from an initial state specified by the init argument, of length specified by sample_size, which is set to 1000 by default when omitted.
A sample path from state 0:
mc0.simulate(init=0, sample_size=50)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
As is clear from the transition matrix P, if the chain starts at state 0, it stays there forever; i.e., 0 is an absorbing state, a state that constitutes a singleton recurrent class.
Start with state 1:
mc0.simulate(init=1, sample_size=50)
array([1, 4, 1, 4, 1, 4, 1, 4, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 4, 1, 4, 1, 4, 4, 4, 1, 4, 1, 4, 1, 4, 4, 1, 4, 1, 4, 1, 4, 1, 4, 4, 4, 4, 1, 4, 1, 4, 1, 4])
You can observe that the chain stays in the recurrent class $\{1, 4\}$ and visits states 1 and 4 with certain frequencies.
Let us compute the frequency distribution along a sample path. We will repeat this operation, so let us write a function for it.
def time_series_dist(mc, init, ts_length=100):
    """
    Return a distribution of visits by a sample path of length ts_length
    of mc with an initial state init.
    """
    X = mc.simulate(init=init, sample_size=ts_length)
    bins = np.arange(mc.n+1)
    hist, bin_edges = np.histogram(X, bins=bins)
    dist = hist/len(X)
    return dist
Here is a frequency distribution along a sample path from initial state 1:
time_series_dist(mc0, init=1)
array([ 0. , 0.35, 0. , 0. , 0.65, 0. ])
Let us visualize the distribution.
def draw_histogram(distribution,
                   ax=None, title=None, xlabel=None, ylabel=None, ylim=(0, 1)):
    """
    Plot the given distribution.
    """
    show = (ax is None)  # show the figure only if no Axes was passed in
    if show:
        fig, ax = plt.subplots()
    n = len(distribution)
    ax.bar(np.arange(n), distribution, align='center')
    ax.set_xlim(-0.5, (n-1)+0.5)
    ax.set_ylim(*ylim)
    if title:
        ax.set_title(title)
    if xlabel:
        ax.set_xlabel(xlabel)
    if ylabel:
        ax.set_ylabel(ylabel)
    if show:
        plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
init = 1
draw_histogram(time_series_dist(mc0, init=init),
               title='Time series distribution with init={0}'.format(init),
               xlabel='States')
plt.show()
Observe that the distribution is close to the stationary distribution [0, 1/3, 0, 0, 2/3, 0].
Start with state 2:
init = 2
draw_histogram(time_series_dist(mc0, init=init),
               title='Time series distribution with init={0}'.format(init),
               xlabel='States')
plt.show()
Run the above (or below) cell several times; you will observe that the distribution differs across sample paths. Sometimes the state is absorbed into the recurrent class $\{0\}$, while other times it is absorbed into the recurrent class $\{1, 4\}$.
init = 2
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
titles = ['Some sample path', 'Another sample path']
titles = [title + ' init={0}'.format(init) for title in titles]
for ax, title in zip(axes, titles):
    draw_histogram(time_series_dist(mc0, init=init), ax=ax, title=title,
                   xlabel='States')
fig.suptitle('Time series distributions', y=-0.05, fontsize=12)
plt.show()
Let us repeat the simulation many times (say, 100 times) and obtain the distribution of visits to each state at a given time period T. That is, we want to simulate the marginal distribution at time T.
def cross_sectional_dist(mc, init, T, num_reps=100):
    """
    Return a distribution of visits at time T across num_reps sample paths
    of mc with an initial state init.
    """
    x = np.empty(num_reps, dtype=int)
    for i in range(num_reps):
        x[i] = mc.simulate(init=init, sample_size=T+1)[-1]
    bins = np.arange(mc.n+1)
    hist, bin_edges = np.histogram(x, bins=bins)
    dist = hist/len(x)
    return dist
init = 2
T = 100
draw_histogram(cross_sectional_dist(mc0, init=init, T=T),
               title='Empirical marginal distribution ' +
                     'at T={0} with init={1}'.format(T, init))
plt.show()
Observe that the distribution is close to a convex combination of the stationary distributions [1, 0, 0, 0, 0, 0] and [0, 1/3, 0, 0, 2/3, 0].
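The weights of this convex combination are the absorption probabilities of the transient states, which can be computed exactly with the standard fundamental-matrix formula $B = (I - Q)^{-1} R$, where $Q$ is the transient-to-transient block of $P$ and $R$ the transient-to-recurrent block. A sketch (the labels transient/recurrent follow part (b) above):

```python
import numpy as np

P = np.zeros((6, 6))
P[0, 0] = 1
P[1, 4] = 1
P[2, [2, 3, 4]] = 1/3
P[3, [0, 5]] = 1/2
P[4, [1, 4]] = 1/2
P[5, [0, 3]] = 1/2

transient = [2, 3, 5]
recurrent = [0, 1, 4]

Q = P[np.ix_(transient, transient)]   # transient -> transient
R = P[np.ix_(transient, recurrent)]   # transient -> recurrent

# B[i, j]: probability that, starting from transient state transient[i],
# the first visit to the recurrent set occurs at state recurrent[j]
B = np.linalg.solve(np.eye(len(transient)) - Q, R)

prob_class0 = B[:, 0]             # absorption into the class {0}
prob_class14 = B[:, 1] + B[:, 2]  # absorption into the class {1, 4}
print(prob_class0)   # [0.5, 1.0, 1.0] for states 2, 3, 5
print(prob_class14)  # [0.5, 0.0, 0.0]
```

From state 2 the chain is absorbed into each recurrent class with probability 1/2, which matches the empirical marginal distribution above.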
Let us run simulations from each of the remaining states, 3, 4, and 5.
inits = [3, 4, 5]
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for init, ax in zip(inits, axes):
    draw_histogram(time_series_dist(mc0, init=init), ax=ax,
                   title='Initial state = {0}'.format(init),
                   xlabel='States')
fig.suptitle('Time series distributions', y=-0.05, fontsize=12)
plt.show()
Plot empirical marginal distributions at T=100 with initial states 3, 4, and 5.
# Write your own code
inits = [3, 4, 5]
T = 100
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for init, ax in zip(inits, axes):
    draw_histogram(cross_sectional_dist(mc0, init=init, T=T), ax=ax,
                   title='Initial state = {0}'.format(init),
                   xlabel='States')
fig.suptitle('Empirical marginal distribution', y=-0.05, fontsize=12)
plt.show()
The marginal distributions at time $T$ are given by the rows of $P^T$.
np.set_printoptions(suppress=True)  # Suppress printing in scientific notation
T = 10
print ('P^{0} ='.format(T))
print (np.linalg.matrix_power(mc0.P, T))
P^10 =
[[ 1.          0.          0.          0.          0.          0.        ]
 [ 0.          0.33398438  0.          0.          0.66601562  0.        ]
 [ 0.49807228  0.16679179  0.00001694  0.0007677   0.33319974  0.00115155]
 [ 0.99902344  0.          0.          0.00097656  0.          0.        ]
 [ 0.          0.33300781  0.          0.          0.66699219  0.        ]
 [ 0.99902344  0.          0.          0.          0.          0.00097656]]
T = 100
print ('P^{0} ='.format(T))
print (np.linalg.matrix_power(mc0.P, T))
P^100 =
[[ 1.          0.          0.          0.          0.          0.        ]
 [ 0.          0.33333333  0.          0.          0.66666667  0.        ]
 [ 0.5         0.16666667  0.          0.          0.33333333  0.        ]
 [ 1.          0.          0.          0.          0.          0.        ]
 [ 0.          0.33333333  0.          0.          0.66666667  0.        ]
 [ 1.          0.          0.          0.          0.          0.        ]]
Compare the rows with the stationary distributions obtained by mc0.stationary_distributions. Note that mc0 is aperiodic (i.e., the least common multiple of the periods of the recurrent classes is one), so that $P^T$ converges as $T \to \infty$.
mc0.period
1
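Convergence of $P^T$ can also be checked numerically: for an aperiodic chain the successive powers settle down, so $\max_{ij} |(P^{T+1})_{ij} - (P^T)_{ij}| \to 0$. A quick sketch (the chosen values of T are arbitrary):

```python
import numpy as np

P = np.zeros((6, 6))
P[0, 0] = 1
P[1, 4] = 1
P[2, [2, 3, 4]] = 1/3
P[3, [0, 5]] = 1/2
P[4, [1, 4]] = 1/2
P[5, [0, 3]] = 1/2

# Largest entrywise gap between consecutive powers of P
for T in [5, 20, 100]:
    diff = np.max(np.abs(np.linalg.matrix_power(P, T + 1)
                         - np.linalg.matrix_power(P, T)))
    print(T, diff)
```

The gap shrinks geometrically, governed by the second-largest eigenvalue modulus of P.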
Consider the Markov chain given by the following stochastic matrix (where the actual values of non-zero probabilities are not important):
P = np.zeros((10, 10))
P[0, 3] = 1
P[1, [0, 4]] = 1/2
P[2, 6] = 1
P[3, [1, 2, 7]] = 1/3
P[4, 3] = 1
P[5, 4] = 1
P[6, 3] = 1
P[7, [6, 8]] = 1/2
P[8, 9] = 1
P[9, 5] = 1
np.set_printoptions(precision=3) # Reduce the number of digits printed
print (P)
[[ 0.     0.     0.     1.     0.     0.     0.     0.     0.     0.   ]
 [ 0.5    0.     0.     0.     0.5    0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     1.     0.     0.     0.   ]
 [ 0.     0.333  0.333  0.     0.     0.     0.     0.333  0.     0.   ]
 [ 0.     0.     0.     1.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     1.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.5    0.     0.5    0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.     0.     0.     1.   ]
 [ 0.     0.     0.     0.     0.     1.     0.     0.     0.     0.   ]]
mc1 = qe.MarkovChain(P)
We call the states $0, \ldots, 9$, respectively, instead of $1, \ldots, 10$.
(a) Check that this Markov chain is irreducible.
mc1.is_irreducible
True
(c) Determine the period of this Markov chain.
mc1.period
3
(d) Identify the cyclic classes.
mc1.cyclic_classes
[array([0, 4, 6, 8], dtype=int32), array([3, 9], dtype=int32), array([1, 2, 5, 7], dtype=int32)]
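The defining property of the cyclic classes can be checked directly against $P$: every one-step transition out of class $i$ must land in class $(i+1) \bmod d$. A sketch with the classes hard-coded from the output above:

```python
import numpy as np

P = np.zeros((10, 10))
P[0, 3] = 1
P[1, [0, 4]] = 1/2
P[2, 6] = 1
P[3, [1, 2, 7]] = 1/3
P[4, 3] = 1
P[5, 4] = 1
P[6, 3] = 1
P[7, [6, 8]] = 1/2
P[8, 9] = 1
P[9, 5] = 1

# Cyclic classes as reported by mc1.cyclic_classes
cyclic = [[0, 4, 6, 8], [3, 9], [1, 2, 5, 7]]
d = len(cyclic)

# From any state in class i, all one-step transitions land in class (i+1) % d
for i, cls in enumerate(cyclic):
    next_class = set(cyclic[(i + 1) % d])
    for s in cls:
        targets = set(int(j) for j in np.where(P[s] > 0)[0])
        assert targets <= next_class
print('cyclic structure verified')
```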
Let us discuss this exercise using the Markov chain from Exercise 9.
print (mc1.P)
[[ 0.     0.     0.     1.     0.     0.     0.     0.     0.     0.   ]
 [ 0.5    0.     0.     0.     0.5    0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     1.     0.     0.     0.   ]
 [ 0.     0.333  0.333  0.     0.     0.     0.     0.333  0.     0.   ]
 [ 0.     0.     0.     1.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     1.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.5    0.     0.5    0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.     0.     0.     1.   ]
 [ 0.     0.     0.     0.     0.     1.     0.     0.     0.     0.   ]]
Denote the period of $P$ by $d$:
d = mc1.period
Reorder the states so that the transition matrix is in block form:
permutation = np.concatenate(mc1.cyclic_classes)
P = mc1.P[permutation, :][:, permutation]
print (P)
[[ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     1.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.     0.     1.     0.   ]
 [ 0.5    0.5    0.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     1.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     1.     0.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.5    0.5    0.     0.     0.     0.     0.     0.   ]]
Let us create a new MarkovChain instance:
mc2 = qe.MarkovChain(P)
Obtain the block components $P_0, \ldots, P_{d-1}$:
P_blocks = []
for i in range(d):
    P_blocks.append(mc2.P[mc2.cyclic_classes[i%d], :][:, mc2.cyclic_classes[(i+1)%d]])
    print('P_{0} ='.format(i))
    print(P_blocks[i])
P_0 =
[[ 1.  0.]
 [ 1.  0.]
 [ 1.  0.]
 [ 0.  1.]]
P_1 =
[[ 0.333  0.333  0.     0.333]
 [ 0.     0.     1.     0.   ]]
P_2 =
[[ 0.5  0.5  0.   0. ]
 [ 0.   0.   1.   0. ]
 [ 0.   1.   0.   0. ]
 [ 0.   0.   0.5  0.5]]
(b) Show that $P^d$ is block diagonal.
Hint: You may use np.linalg.matrix_power.
# Write your own code
P_power_d = np.linalg.matrix_power(P, d)
print(P_power_d)
[[ 0.167  0.167  0.5    0.167  0.     0.     0.     0.     0.     0.   ]
 [ 0.167  0.167  0.5    0.167  0.     0.     0.     0.     0.     0.   ]
 [ 0.167  0.167  0.5    0.167  0.     0.     0.     0.     0.     0.   ]
 [ 0.     1.     0.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.833  0.167  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.167  0.167  0.5    0.167]]
(c) Verify that the $i$th diagonal block of $P^d$ equals $P_i P_{i+1} \ldots P_{d-1} P_0 \ldots P_{i-1}$. Store these diagonal blocks in a list called P_power_d_blocks.
# Write your own code
P_power_d_blocks = []
for i in range(d):
    ind = mc2.cyclic_classes[i]
    P_power_d_blocks.append(P_power_d[ind, :][:, ind])
    print('P_power_d_{0} ='.format(i))
    print(P_power_d_blocks[i])
P_power_d_0 =
[[ 0.167  0.167  0.5    0.167]
 [ 0.167  0.167  0.5    0.167]
 [ 0.167  0.167  0.5    0.167]
 [ 0.     1.     0.     0.   ]]
P_power_d_1 =
[[ 0.833  0.167]
 [ 1.     0.   ]]
P_power_d_2 =
[[ 0.333  0.333  0.     0.333]
 [ 0.333  0.333  0.     0.333]
 [ 0.333  0.333  0.     0.333]
 [ 0.167  0.167  0.5    0.167]]
(d) Obtain the stationary distributions associated with the diagonal blocks of $P^d$. Store them in a list called pi_s.
# Write your own code
pi_s = []
for i in range(d):
    mc_i = qe.MarkovChain(P_power_d_blocks[i])
    pi_s.append(mc_i.stationary_distributions[0])
    print('pi_s_{0} ='.format(i))
    print(pi_s[i])
pi_s_0 =
[ 0.143  0.286  0.429  0.143]
pi_s_1 =
[ 0.857  0.143]
pi_s_2 =
[ 0.286  0.286  0.143  0.286]
Verify that $\pi^{(i+1) \bmod d} = \pi^i P_i$.
# Write your own code
for i in range(d):
    print(np.dot(np.ravel(pi_s[i]), P_blocks[i]))
[ 0.857  0.143]
[ 0.286  0.286  0.143  0.286]
[ 0.143  0.286  0.429  0.143]
Obtain the unique stationary distribution $\pi$ of the original Markov chain.
print (mc2.stationary_distributions[0])
[ 0.048 0.095 0.143 0.048 0.286 0.048 0.095 0.095 0.048 0.095]
Verify that $\pi = (\pi^0, \ldots, \pi^{d-1}) / d$.
# Write your own code
pi = np.concatenate([np.ravel(pi_i) for pi_i in pi_s]) / d
print(pi)
print(np.allclose(pi, mc2.stationary_distributions[0]))
[ 0.048  0.095  0.143  0.048  0.286  0.048  0.095  0.095  0.048  0.095]
True
(e)
# Write your answer in a Markdown mode, with providing code if necessary.
Print $P^1, P^2, \ldots, P^d$:
# Write your own code
for i in range(1, d+1):
    P_power_i = np.linalg.matrix_power(P, i)
    print('P_power_{0} ='.format(i))
    print(P_power_i)
P_power_1 =
[[ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     1.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.     0.     1.     0.   ]
 [ 0.5    0.5    0.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     1.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     1.     0.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.5    0.5    0.     0.     0.     0.     0.     0.   ]]
P_power_2 =
[[ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.     0.     1.     0.   ]
 [ 0.167  0.167  0.5    0.167  0.     0.     0.     0.     0.     0.   ]
 [ 0.     1.     0.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.5    0.5    0.     0.     0.     0.   ]]
P_power_3 =
[[ 0.167  0.167  0.5    0.167  0.     0.     0.     0.     0.     0.   ]
 [ 0.167  0.167  0.5    0.167  0.     0.     0.     0.     0.     0.   ]
 [ 0.167  0.167  0.5    0.167  0.     0.     0.     0.     0.     0.   ]
 [ 0.     1.     0.     0.     0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.833  0.167  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     1.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.333  0.333  0.     0.333]
 [ 0.     0.     0.     0.     0.     0.     0.167  0.167  0.5    0.167]]
Print $P^{2d}$, $P^{4d}$, and $P^{6d}$:
# Write your own code
P_power_2d = np.linalg.matrix_power(P, 2*d)
P_power_4d = np.linalg.matrix_power(P, 4*d)
P_power_6d = np.linalg.matrix_power(P, 6*d)
print('P_power_2d')
print(P_power_2d)
print('P_power_4d')
print(P_power_4d)
print('P_power_6d')
print(P_power_6d)
P_power_2d
[[ 0.139  0.306  0.417  0.139  0.     0.     0.     0.     0.     0.   ]
 [ 0.139  0.306  0.417  0.139  0.     0.     0.     0.     0.     0.   ]
 [ 0.139  0.306  0.417  0.139  0.     0.     0.     0.     0.     0.   ]
 [ 0.167  0.167  0.5    0.167  0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.861  0.139  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.833  0.167  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.278  0.278  0.167  0.278]
 [ 0.     0.     0.     0.     0.     0.     0.278  0.278  0.167  0.278]
 [ 0.     0.     0.     0.     0.     0.     0.278  0.278  0.167  0.278]
 [ 0.     0.     0.     0.     0.     0.     0.306  0.306  0.083  0.306]]
P_power_4d
[[ 0.143  0.286  0.428  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.428  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.428  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.144  0.282  0.431  0.144  0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.856  0.144  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.285  0.285  0.144  0.285]
 [ 0.     0.     0.     0.     0.     0.     0.285  0.285  0.144  0.285]
 [ 0.     0.     0.     0.     0.     0.     0.285  0.285  0.144  0.285]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.141  0.286]]
P_power_6d
[[ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]]
Print $P^{10d + 1}, \ldots, P^{10d + d}$:
# Write your own code
for i in range(1, d+1):
    P_power_10d_i = np.linalg.matrix_power(P, 10*d + i)
    print('P_power_{0} ='.format(10*d + i))
    print(P_power_10d_i)
P_power_31 =
[[ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]]
P_power_32 =
[[ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]]
P_power_33 =
[[ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.143  0.286  0.429  0.143  0.     0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.857  0.143  0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]
 [ 0.     0.     0.     0.     0.     0.     0.286  0.286  0.143  0.286]]
Let us simulate the Markov chain mc2.
Plot the frequency distribution of visits to the states along a sample path starting at state 0:
init = 0
draw_histogram(time_series_dist(mc2, init=init),
               title='Time series distribution with init={0}'.format(init),
               xlabel='States', ylim=(0, 0.35))
plt.show()
Observe that the distribution is close to the (unique) stationary distribution.
Next, plot the simulated marginal distributions at $T = 10d + 1, \ldots, 11d, 11d + 1, \ldots, 12d$ with initial state 0:
init = 0
T = 10 * d + 1
fig, axes = plt.subplots(2, d, figsize=(12, 6))
for t, ax in enumerate(axes.flatten()):
    draw_histogram(cross_sectional_dist(mc2, init=init, T=T+t), ax=ax,
                   title='T = {0}'.format(T+t))
fig.suptitle('Empirical marginal distributions with init={0}'.format(init), y=0.05, fontsize=12)
plt.show()
Compare these marginal distributions with the rows of $P^{10d + 1}, \ldots, P^{10d + d}$ computed above.
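To make the comparison concrete, here is a sketch checking that, for a state in cyclic class $j$, the corresponding row of $P^{10d+i}$ is (numerically) the stationary distribution $\pi^{(j+i) \bmod d}$ embedded on class $(j+i) \bmod d$. The block stationary distributions are hard-coded from the pi_s computation above:

```python
import numpy as np

# Transition matrix of Exercise 9, with states reordered by cyclic class
P = np.zeros((10, 10))
P[0, 3] = 1
P[1, [0, 4]] = 1/2
P[2, 6] = 1
P[3, [1, 2, 7]] = 1/3
P[4, 3] = 1
P[5, 4] = 1
P[6, 3] = 1
P[7, [6, 8]] = 1/2
P[8, 9] = 1
P[9, 5] = 1
perm = [0, 4, 6, 8, 3, 9, 1, 2, 5, 7]
P = P[np.ix_(perm, perm)]

d = 3
classes = [range(0, 4), range(4, 6), range(6, 10)]

# Block stationary distributions pi^0, pi^1, pi^2, embedded into
# length-10 vectors supported on their cyclic classes
pi = np.zeros((d, 10))
pi[0, 0:4] = [1/7, 2/7, 3/7, 1/7]
pi[1, 4:6] = [6/7, 1/7]
pi[2, 6:10] = [2/7, 2/7, 1/7, 2/7]

# Row s of P^{10d+i}, for s in cyclic class j, is close to pi^{(j+i) % d}
for i in range(d):
    Pk = np.linalg.matrix_power(P, 10 * d + i)
    for j, cls in enumerate(classes):
        for s in cls:
            assert np.allclose(Pk[s], pi[(j + i) % d], atol=1e-6)
print('rows match the cyclic stationary distributions')
```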