%pylab inline
lecture = 11
III. Numerical Methods
IV. Numerical Examples in Python
If $f(t,y)$ is independent of $y$, solving the ODE is simply an integration.
In general, the integral equation is not any easier to solve.
If $f(t,y)$ is independent of $t$, the ODE is said to be autonomous.
$\hspace{0.3in}$ where
$$ \bs{z} = \left[ \begin{matrix} y(t) \\ y'(t) \\ \vdots \\ y^{(n-1)}(t) \end{matrix} \right], \;\; \mbox{and} \;\; \bs{g}(t, \bs{z}) = \left[ \begin{matrix} y'(t) \\ y''(t) \\ \vdots \\ y^{(n-1)}(t) \\ f(t,y(t),y'(t), \cdots, y^{(n-1)}(t)) \end{matrix} \right] $$
Interesting things one wants to know about the IVP of the general first order system of ODEs:
Picard's theorem:
If $f(t,y)$ is Lipschitz continuous in $y$ in a neighborhood of $(t_0, y(t_0))$, then the IVP of the ODE has a unique solution in the neighborhood.
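A classical example (not from the lecture) showing why the Lipschitz condition matters: the IVP
$$ y' = \sqrt{y}, \quad y(0) = 0 $$
has at least two solutions, $y \equiv 0$ and $y(t) = t^2/4$, because $f(t,y) = \sqrt{y}$ is not Lipschitz continuous in $y$ near $y = 0$.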
Define the Laplace transform of option price function $C(t,S)$ as
$$ \renewcommand{\hC}{\hat{C}} \hC(z,\cdot) := \mathcal{L}[C](z) = \int_{0}^{\infty}C(t,\cdot)e^{-zt} \; dt $$
Taking the Laplace transform of the Black-Scholes equation,
$$ z\hC = \frac{1}{2}\sigma^2S^2 \hC_{SS} + rS \hC_S - r \hC + C_0 $$
Here $C_0$ is the time reversed payoff condition (*).
So one way to solve Black-Scholes is to solve an ODE and then invert the Laplace Transform.
Assume the short rate is affine under the risk-neutral measure
$$ dr_t = \kappa(\theta-r_t)dt + \sqrt{\sigma_1 +\sigma_2 r_t}\;dW_t $$
Then bond prices solve the PDE
$$ \frac{1}{2}P_{rr}(\sigma_1 +\sigma_2 r) + P_r\kappa(\theta - r)+ P_t -rP = 0 $$
with $P(T,T) = 1$. Looking for a solution of the form
$$ P(r,t,T) = e^{A(T-t) - B(T-t)r}, $$
we find that $A(\cdot), B(\cdot)$ solve the system of ODEs
$$ \begin{aligned} -B' & = \frac{1}{2}\sigma_2B^2 + \kappa B - 1 \\ A' & = \frac{1}{2}\sigma_1B^2 - \kappa \theta B \end{aligned} $$
with $A(0) = B(0) = 0$.
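As a sketch (my own illustration, with hypothetical parameter values), this system can be integrated numerically in the time-to-maturity variable $\tau = T-t$, and the bond price curve recovered from $P = e^{A - Br}$:

```python
import numpy as np
from scipy.integrate import odeint

# hypothetical affine short rate parameters (illustration only)
kappa, theta = 0.5, 0.04
sigma1, sigma2 = 0.0, 0.02   # sigma1 = 0, sigma2 > 0 gives a CIR-type model

def riccati(u, tau):
    """ODE system for (B, A) in tau = T - t:
       B' = 1 - kappa*B - 0.5*sigma2*B^2,  A' = 0.5*sigma1*B^2 - kappa*theta*B."""
    B, A = u
    return [1.0 - kappa*B - 0.5*sigma2*B**2,
            0.5*sigma1*B**2 - kappa*theta*B]

taus = np.linspace(0.0, 10.0, 101)
sol = odeint(riccati, [0.0, 0.0], taus)   # initial condition A(0) = B(0) = 0
B, A = sol[:, 0], sol[:, 1]

r0 = 0.03                 # current short rate (assumed)
P = np.exp(A - B*r0)      # zero-coupon bond prices across maturities
```

With $A(0)=B(0)=0$, the price at $\tau=0$ is exactly 1, and the curve decreases in maturity as expected for positive rates.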
We will focus on Finite Difference Methods (FDM) here.
Other methods for ODE/PDEs: Finite element methods, Spectral Methods, etc.
All methods for the IVP of an ODE are recursions that generate $y^{n+1}$ from the previous $y^n$, together with a few evaluations of the function $f(t, y)$ around $t^n$.
Employing finite difference method typically means:
Generate a grid of points $(t_k, y_i)$ where we want to find the solutions
Substitute the derivatives in ODE/PDE with finite difference schemes, which converts the ODE/PDE into a system of algebraic equations.
Solve the system of algebraic equations.
Implement and debug the computer code.
Perform sanity checks, error analysis, sensitivity analysis, etc., by any available means: intuitive, analytical, or numerical.
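As a minimal sketch (my own illustration) of this recipe for an IVP $y' = f(t,y)$, $y(t_0) = y_0$: the simplest such recursion is the forward Euler scheme $y^{n+1} = y^n + h\,f(t^n, y^n)$.

```python
import numpy as np

def forward_euler(f, y0, t0, T, n):
    """Solve y' = f(t, y), y(t0) = y0 on [t0, T] with n uniform steps."""
    h = (T - t0) / n
    t = t0 + h * np.arange(n + 1)   # the grid of points
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(n):              # the recursion: one f-evaluation per step
        y[k + 1] = y[k] + h * f(t[k], y[k])
    return t, y

# sanity check on y' = y, y(0) = 1, whose exact solution is e^t
t, y = forward_euler(lambda t, y: y, 1.0, 0.0, 1.0, 1000)
```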
For any FDM that is employed to solve practical problems, we should ask
Defined as the amount by which the exact solution does not satisfy the numerical scheme.
Just plug the exact solution into the FDM scheme and compute the residual. For the Euler scheme, this is
$$ \mathcal{N}_h y(t^n) = y(t^{n+1}) - y(t^n) - h f(t^n, y(t^n)) = y(t^{n+1}) - y(t^n) - h y'(t^n) = O(h^2), $$
where we used the exact ODE $y'(t^n) = f(t^n, y(t^n))$.
An FDM is consistent if the local truncation error goes to 0 as the step size goes to 0.
An FDM is consistent with order $q>1$ if $|\mathcal{N}_h y(t^n) | = O(h^q)$.
The Euler method is consistent with order $q = 2$.
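This can be checked numerically (a sketch, using the undivided form of the truncation error): plugging the exact solution of $y' = y$ into the Euler scheme, the residual should shrink by a factor of about $2^2 = 4$ each time $h$ is halved.

```python
import numpy as np

def lte(h, t=0.0):
    """Truncation error of forward Euler on y' = y, exact solution y = e^t."""
    y = np.exp
    # plug the exact solution into the scheme: y(t+h) - y(t) - h*f(t, y(t))
    return y(t + h) - y(t) - h * y(t)

hs = [0.1, 0.05, 0.025]
errs = [abs(lte(h)) for h in hs]
ratios = [errs[i] / errs[i + 1] for i in range(len(errs) - 1)]
# each ratio should be close to 4, confirming O(h^2)
```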
It turns out that consistency alone is not enough.
Consider the model (for analyzing stability) ODE $y' = \lambda y$, $y(0) = 1$, with $\lambda < 0$.
The Euler iterates $y^{n+1} = (1+\lambda h)y^n$ will explode if $|1+\lambda h| > 1$; for example, when $\lambda = -10$ and $h = 0.25$.
Meanwhile, the exact solution of the problem is $y(t) = e^{\lambda t}$, which decays exponentially for $\lambda < 0$.
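A sketch of this blow-up: on the model problem, forward Euler reduces to the recursion $y^{n+1} = (1+\lambda h)\,y^n$, and with $\lambda = -10$, $h = 0.25$ the amplification factor is $-1.5$.

```python
lam, h = -10.0, 0.25
y = 1.0
traj = [y]
for _ in range(20):
    y = (1.0 + lam * h) * y   # forward Euler step on y' = lam*y
    traj.append(y)
# |1 + lam*h| = 1.5 > 1: the iterates oscillate in sign and grow without bound,
# while the exact solution e^(-10 t) decays
```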
The problem is with error propagation of the FDM, we need more...
We need one more property: zero-stability
A method is called zero-stable if there are constants $h_0$ and $K$ such that, for any mesh functions $y_h$ and $z_h$ on an interval $[0,T]$ with $h \leq h_0$, a bound of the following (possibly weaker) form holds:
$$ |y^n - z^n| \leq K\left\{ |y^0 - z^0| + \max_{1 \leq j \leq N} | \mathcal{N}_h y(t^j) - \mathcal{N}_h z(t^j) | \right\} $$ for $1 \leq n \leq N$.
Zero-stability essentially says that errors (e.g. roundoff errors, function evaluation errors) introduced at any step do not get magnified later on.
One-step FDMs can be shown to be zero-stable for sufficiently small step sizes, provided $f$ is Lipschitz continuous in $y$.
Euler's method is therefore said to be conditionally stable.
Its region of absolute stability is defined as the set of complex numbers $z = \lambda h$ such that the FDM solution decays to 0. For Euler's method this is $|1 + z| < 1$, which
amounts to the disk of radius 1 centered at $(-1,0)$.
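A quick sanity check (my own sketch): for sample points $z$ away from the disk boundary, membership in the disk should agree with whether the Euler recursion $y^{n+1} = (1+z)y^n$ actually decays.

```python
def in_euler_region(z):
    """Interior of the disk of radius 1 centered at -1."""
    return abs(1 + z) < 1

def iterates_decay(z, n=200):
    """Does y^n = (1+z)^n y^0 decay toward 0?"""
    return abs((1 + z) ** n) < 1e-6

# sample points chosen away from the boundary of the disk
samples = [-0.5, -1.0, -1.5, -2.5, -1 + 0.5j, 0.2 + 0.2j]
checks = [in_euler_region(z) == iterates_decay(z) for z in samples]
```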
The method is implicit: if $f(t,y)$ is nonlinear, we would have to solve a nonlinear equation to get $y^{n+1}$.
The local truncation error is $\mathcal{N}_h y(t^n) = O(h^3)$.
The method is second order, but only conditionally stable.
This is an example of a powerful class of multi-step methods called predictor-corrector methods: the forward Euler is the predictor and the Crank-Nicolson is the corrector.
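As an illustration (my own sketch, not code from the lecture), this predictor-corrector pairing is Heun's method; on $y' = y$ the global error should drop by a factor of about 4 when $h$ is halved, confirming second order.

```python
import numpy as np

def heun_step(f, t, y, h):
    """One step of Heun's predictor-corrector method."""
    y_pred = y + h * f(t, y)                           # predictor: forward Euler
    return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # corrector: trapezoidal rule

def solve(n):
    """Integrate y' = y, y(0) = 1 over [0, 1] with n Heun steps."""
    h, y = 1.0 / n, 1.0
    for k in range(n):
        y = heun_step(lambda t, u: u, k * h, y, h)
    return y

err_coarse = abs(solve(100) - np.e)   # h = 0.01
err_fine = abs(solve(200) - np.e)     # h = 0.005
# err_coarse / err_fine should be close to 2^2 = 4
```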
ODEs with rapidly decaying transients or with widely varying time scales often present huge problems for numerical methods that are not A-stable.
Example: $$ y' = -50(y - t) + 1, \quad y(0) = 1. $$
The solution to the problem can be found exactly: $y(t) = e^{-50t} + t$. It has a slowly varying part $t$ and a rapidly decaying part $e^{-50t}$.
Using the explicit Euler method requires a time step $h < 2/50 = 0.04 \ll 1$, i.e. many time steps before reaching a reasonable solution.
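A sketch, using the model problem $y' = -50(y - t) + 1$, $y(0) = 1$ (which matches the stated exact solution $e^{-50t} + t$): forward Euler with $h$ on either side of the bound behaves very differently at $t = 1$.

```python
import numpy as np

def euler(h, T=1.0):
    """Forward Euler for y' = -50*(y - t) + 1, y(0) = 1."""
    n = int(round(T / h))
    y = 1.0
    for k in range(n):
        t = k * h
        y = y + h * (-50.0 * (y - t) + 1.0)
    return y

exact = np.exp(-50.0) + 1.0   # y(1) = e^{-50} + 1
stable = euler(0.02)          # h < 0.04: close to the exact value
unstable = euler(0.05)        # h > 0.04: blows up
```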
In the homework, you will encounter another example: in a system of first order ODEs, different components of the solution evolve on very different time scales, and the fastest-changing one dictates the step size required for the scheme to be stable.
For stiff ODEs, you either choose a really small step size with an explicit method (very inefficient!) or choose an A-stable method (which costs more per step).
scipy.integrate contains a collection of ODE solvers, mostly wrapped around the mature Fortran package ODEPACK. Newer interfaces are available (please check).
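In recent SciPy versions the recommended front end is `solve_ivp` (which also offers implicit methods such as `'BDF'` and `'Radau'` for stiff problems); a minimal sketch on $y' = -y$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# integrate y' = -y, y(0) = 1 on [0, 2] with the default explicit RK45 solver
sol = solve_ivp(lambda t, y: -y, (0.0, 2.0), [1.0],
                method='RK45', t_eval=np.linspace(0.0, 2.0, 21))
# sol.t holds the time grid; sol.y has shape (n_states, n_times)
```

For a stiff problem, swapping `method='RK45'` for `method='BDF'` lets the solver take large steps once the transient has died out.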
Integrating "Van der Pol Oscillator" using odeint in scipy:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

mu = 0.1   # the damping parameter mu of the oscillator

# Van der Pol system: x' = y, y' = mu*(1 - x^2)*y - x
def f(y, t):
    return [y[1], mu*(1 - y[0]*y[0])*y[1] - y[0]]

# initial value
y0 = [2.0, 1.0]
n = 10000
T = 3.0*mu
h = T/n
ts = np.arange(0.0, T, h)
ys = odeint(f, y0, ts)

fig = plt.figure(figsize=[12, 4])
plt.subplot(1, 2, 1)
plt.plot(ts, ys, '.-')
plt.xlabel('Time')
plt.title('odeint')

ax = fig.add_subplot(122)
ax.set_axis_off()
ax.text(0, .5, r"$\;\; y' = \mu (1-x^2)y - x$", size="xx-large")
ax.text(0, .6, r"$\;\; x' = y$", size="xx-large")
ax.text(0, .75, "Integrating Van der Pol system", size="x-large")
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import ode
from mpl_toolkits.mplot3d import Axes3D

def lorenz_sys(t, q):
    x, y, z = q
    # sigma, rho and beta are global.
    return [sigma*(y - x),
            rho*x - y - x*z,
            x*y - beta*z]

#ic = [1.0, 2.0, 1.0]
ic = [0.01, 0.01, 0.01]
t0 = 0.0
t1 = 70.0
dt = 0.01

sigma = 10.0
rho = 28.0
beta = 8.0/3.0
#sigma = 28.0
#rho = 46.92
#beta = 4.0

solver = ode(lorenz_sys)
solver.set_initial_value(ic, t0)
#solver.set_integrator('dop853')
solver.set_integrator('dopri5')

t = []
sol = []
while solver.successful() and solver.t < t1:
    solver.integrate(solver.t + dt)
    t.append(solver.t)
    sol.append(solver.y)
t = np.array(t)
sol = np.array(sol)

fig = plt.figure()
ax = fig.add_subplot(121, projection='3d')
ax.plot(sol[:, 0], sol[:, 1], sol[:, 2])
ax.set_xlabel('x')
ax.set_ylabel('y')

ax2 = fig.add_subplot(122)
ax2.set_axis_off()
ax2.text(0, .3, r"$\;\; z' = x y - \beta z$", size="xx-large")
ax2.text(0, .4, r"$\;\; y' = x (\rho - z) - y$", size="xx-large")
ax2.text(0, .5, r"$\;\; x' = \sigma (y - x)$", size="xx-large")
ax2.text(0, .65, "Integrating Lorenz system", size="x-large")
plt.show()
Rewrite it as a system of first order ODEs and find the region of absolute stability. (Hint: find the eigenvalue decomposition of the coefficient matrix; the region of absolute stability is the set of step sizes for which each eigenvalue satisfies the condition for the 1D ODE.)