import numpy as np
from numpy import *  # the cells below use array, zeros, diag, matrix unqualified
np.set_printoptions(suppress=True)
What Bruce Levin says in [@Turrientes2013], S1 File:
p0 = 1e-5
s = 0.001
pe = 0.1
S = array([[1,0],[0,1-s]])
p = array([p0, 1-p0])
for t in range(100000):
    if p[0] > pe:
        break
    p = S.dot(p)
    p /= p.sum()
print(t)
9312
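A sanity check of my own (not in the S1 File): in this two-type model the normalization cancels out of the ratio of the two frequencies, so the odds of the fit type grow by a factor 1/(1-s) per generation and the stopping time has a closed form, using the same p0, pe, s as above:

```python
from math import ceil, log

# closed form (my derivation): odds(t) = odds(0) * (1-s)**(-t);
# solve for the t where the fit type's frequency reaches pe
p0, pe, s = 1e-5, 0.1, 0.001
odds0 = p0 / (1 - p0)   # initial odds of the fit type
oddse = pe / (1 - pe)   # odds at the stopping threshold
t_star = log(odds0 / oddse) / log(1 - s)
print(ceil(t_star))     # 9312, matching the simulated loop
```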
What I say:
n = 500 # max number of mutations
mu = 0.0004 # Wielgoss2011
tau = 35 # mutator rate increase
s = 0.001 # selection per site
print("mutation rate:", mu)
print("selection coef:", s)
print("lowest fitness:", (1-s)**n)
mutation rate: 0.0004
selection coef: 0.001
lowest fitness: 0.606378944861
considering only single deleterious mutations per generation, the mutation matrix is:
zeros_matrix = lambda n: matrix(zeros((n,n)))
# lower bidiagonal: keep i mutations with prob. 1-mu, gain one with prob. mu
mutation_matrix = lambda n,mu: diag([1-mu] * (n+1)) + diag([mu] * n,-1)
A = mutation_matrix(n,mu)        # non-mutators (NM)
B = zeros_matrix(n+1)
C = zeros_matrix(n+1)
D = mutation_matrix(n,tau * mu)  # mutators (CM), rate increased tau-fold
M = np.bmat([[A, B], [C, D]])
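To make the structure concrete, here is the same construction for a toy case (n = 2, mu = 0.1, both my own picks for illustration). Note the last column does not sum to 1: the class at maximal mutation load has nowhere left to mutate to in this truncated model, which the renormalization step in the evolution loop absorbs.

```python
import numpy as np

# toy mutation matrix (n = 2, mu = 0.1; illustration only)
mutation_matrix = lambda n, mu: np.diag([1 - mu] * (n + 1)) + np.diag([mu] * n, -1)
A2 = mutation_matrix(2, 0.1)
print(A2)
# lower bidiagonal: 0.9 on the diagonal, 0.1 just below it
print(A2.sum(axis=0))  # columns sum to [1, 1, 0.9] -- mass leaks from the last class
```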
the selection matrix is:
# fitness declines multiplicatively with the number of deleterious mutations
selection_matrix = lambda n,s: diag([(1-s)**i for i in range(n+1)])
A = selection_matrix(n,s)
B = zeros_matrix(n+1)
C = zeros_matrix(n+1)
D = selection_matrix(n,s)
S = np.bmat([[A, B], [C, D]])
the evolution matrix is:
E = array(M*S)
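As a quick shape check (my own, on a toy n = 2 system with made-up parameters): since B and C are zero, the evolution matrix is block-diagonal, i.e. there is no mutation between the NM and CM classes and they interact only through the shared normalization.

```python
import numpy as np

# rebuild the pipeline for n = 2 (toy parameters, illustration only)
n2, mu2, tau2, s2 = 2, 0.01, 35, 0.001
mut = lambda n, mu: np.diag([1 - mu] * (n + 1)) + np.diag([mu] * n, -1)
sel = lambda n, s: np.diag([(1 - s) ** i for i in range(n + 1)])
Z = np.zeros((n2 + 1, n2 + 1))
M2 = np.block([[mut(n2, mu2), Z], [Z, mut(n2, tau2 * mu2)]])
S2 = np.block([[sel(n2, s2), Z], [Z, sel(n2, s2)]])
E2 = M2 @ S2
print(E2.shape)                      # (6, 6)
print(np.count_nonzero(E2[:3, 3:]))  # 0: no NM <-> CM flow
```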
now for the evolution:
starting from mutation-free:
p = zeros((n+1)*2)
p[0] = p0
p[n+1] = 1-p0
print p
[ 0.00001 0. 0. ..., 0. 0. 0. ]
starting from mutation-selection balance:
from scipy.stats import poisson
p = array([p0 * poisson(mu/s).pmf(i) for i in range(n+1)] +
          [(1-p0) * poisson(tau*mu/s).pmf(i) for i in range(n+1)])
print p
[ 0.0000067 0.00000268 0.00000054 ..., 0. 0. 0. ]
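The Poisson profile with mean mu/s is the classic mutation-selection balance for this kind of model; as a check (my own), it should be close to a fixed point of one selection-then-mutation round within a single class:

```python
import numpy as np
from scipy.stats import poisson

# single-class fixed-point check (my own): selection then mutation, renormalized
n, mu, s = 500, 0.0004, 0.001
M1 = np.diag([1 - mu] * (n + 1)) + np.diag([mu] * n, -1)
S1 = np.diag([(1 - s) ** i for i in range(n + 1)])
q = poisson(mu / s).pmf(np.arange(n + 1))  # Poisson(0.4) profile
q2 = M1 @ S1 @ q
q2 /= q2.sum()
print(np.abs(q2 - q).max())  # tiny residual: Poisson(mu/s) is (nearly) stationary
```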
mean fitness at the mutation-selection balance:
sv = array([(1-s)**i for i in range(n+1)])
w_nm = sv.dot(p[:n+1])/p0
w_cm = sv.dot(p[n+1:])/(1-p0)
print w_nm, w_cm
0.999600079989 0.986097544263
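These two numbers also have a closed form: for K ~ Poisson(lam), E[(1-s)**K] = exp(-lam*s), and with lam = mu/s the s cancels, giving exp(-mu) and exp(-tau*mu) (my check, consistent with the printed values):

```python
from math import exp

# mean fitness at balance: E[(1-s)**K] = exp(-lam*s) for K ~ Poisson(lam)
mu, tau = 0.0004, 35
print(exp(-mu))        # 0.999600079989... = w_nm
print(exp(-tau * mu))  # 0.986097544263... = w_cm
```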
evolve until the non-mutators (NM) reach pe:
for t in range(1000000):
    if p[0] > pe:
        break
    p = E.dot(p)
    p /= p.sum()
print(t)
718
this is an order of magnitude lower than Levin's result (~9000).
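A back-of-the-envelope way (mine) to see why: starting from balance, the non-mutators out-replicate the mutators by w_nm/w_cm = exp((tau-1)*mu) per generation rather than Levin's 1/(1-s), so the same odds argument as in the two-type model predicts roughly:

```python
from math import exp, log

# effective selection between the two classes at balance (my approximation)
mu, tau = 0.0004, 35
p0, pe = 1e-5, 0.1
ratio = exp((tau - 1) * mu)  # per-generation NM advantage, ~1.0137
t_approx = log((pe / (1 - pe)) / (p0 / (1 - p0))) / log(ratio)
print(round(t_approx))       # ~685, the same ballpark as the simulated 718
```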
This notebook is intended for internal use. The author doesn't stand behind anything written here and no disrespect is intended.