We saw in the previous notebook how to set up a matrix by integrating between pairs of basis functions. In this notebook we will solve for the eigenvalues and eigenvectors of the Hamiltonian for a square well, exploring what happens when we use a finite basis set and introducing changes to the potential in the well.
We will start by using the eigenbasis (since it's simple) and will add small changes to the potential. This will give us some insight into the effect of perturbations (and will feed directly into our studies of perturbation theory). So we'll first set up the bits and pieces of code we'll need, some taken from the previous notebook.
# Import libraries and set up in-line plotting.
%matplotlib inline
import matplotlib.pyplot as pl
import numpy as np
# This is a new library - linear algebra includes solving for eigenvalues & eigenvectors of matrices
import numpy.linalg as la
# Define the eigenbasis - normalisation needed elsewhere
def eigenbasis_sw(n,width,norm,x):
    """The nth eigenbasis function for a square well running from 0 to a, norm*sin(n pi x/a)"""
    fac = np.pi*n/width
    return norm*np.sin(fac*x)
# We will also define the second derivative for kinetic energy (KE)
def d2eigenbasis_sw(n,width,norm,x):
    """The second derivative of the nth eigenbasis function, -(n pi/a)^2 norm*sin(n pi x/a)"""
    fac = np.pi*n/width
    return -fac*fac*norm*np.sin(fac*x)
# Define the x-axis
width = 1.0
num_x_points = 101
x = np.linspace(0.0,width,num_x_points)
dx = width/(num_x_points - 1)
# Integrate two functions over the width of the well
def integrate_functions(f1,f2,size_x,dx):
    """Integrate the product of two functions over the defined x range"""
    # NB we use 'total' rather than shadowing the built-in 'sum'
    total = 0.0
    for i in range(size_x):
        total = total + f1[i]*f2[i]
    total = total*dx
    return total
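The loop above is a simple rectangle-rule sum. As a minimal standalone sketch (reusing the notebook's grid, not part of the original code), the same integral can be written in vectorised NumPy, with a hand-rolled trapezoidal rule for comparison:

```python
import numpy as np

# Same grid as in the notebook
width = 1.0
num_x_points = 101
x = np.linspace(0.0, width, num_x_points)
dx = width / (num_x_points - 1)

# Product of the (unnormalised) ground-state function with itself
f1 = np.sin(np.pi * x / width)
f2 = np.sin(np.pi * x / width)
g = f1 * f2

# Vectorised equivalent of the loop in integrate_functions
rect = np.sum(g) * dx
# Trapezoidal rule on the same grid (endpoints weighted by 1/2)
trap = 0.5 * np.sum(g[1:] + g[:-1]) * dx

# Both should be very close to the analytic value 1/2
print(rect, trap)
```

Because the basis functions vanish at the well boundaries, the rectangle and trapezoidal rules agree here; for functions that are nonzero at the endpoints the trapezoidal rule is the safer choice.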
# Now set up the array of basis functions - specify the size of the basis
num_basis = 10
# Each row of these arrays will hold one basis function evaluated on the grid
basis_array = np.zeros((num_basis,num_x_points))
d2basis_array = np.zeros((num_basis,num_x_points))
# Loop over first num_basis basis states, normalise and create an array
# NB the basis_array will start from 0
for i in range(num_basis):
    n = i+1
    # Calculate A = <phi_n|phi_n>
    integral = integrate_functions(eigenbasis_sw(n,width,1.0,x),eigenbasis_sw(n,width,1.0,x),num_x_points,dx)
    # Use 1/sqrt(A) as the normalisation constant
    normalisation = 1.0/np.sqrt(integral)
    basis_array[i,:] = eigenbasis_sw(n,width,normalisation,x)
    d2basis_array[i,:] = d2eigenbasis_sw(n,width,normalisation,x)
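As a quick sanity check on this setup, the basis should come out orthonormal on the grid: the overlap matrix $S_{mn} = \langle \phi_m \vert \phi_n \rangle$ should be the identity. A standalone sketch (rebuilding the same basis, not part of the original code):

```python
import numpy as np

# Rebuild the notebook's grid and normalised basis
width = 1.0
num_x_points = 101
x = np.linspace(0.0, width, num_x_points)
dx = width / (num_x_points - 1)
num_basis = 10

basis = np.zeros((num_basis, num_x_points))
for i in range(num_basis):
    phi = np.sin((i + 1) * np.pi * x / width)
    # Normalise so that <phi|phi> = 1 with the grid-sum integral
    phi = phi / np.sqrt(np.sum(phi * phi) * dx)
    basis[i, :] = phi

# Overlap matrix S_mn = <phi_m|phi_n>, evaluated by the same grid sum
S = basis @ basis.T * dx
print(np.allclose(S, np.eye(num_basis)))
```

The sine functions are orthogonal even under the discrete grid sum, so the off-diagonal elements vanish to machine precision.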
We have now set up all the mechanics that we need to create matrices for different Hamiltonians, except for one thing: the potential. We have an implicit potential already in the infinite square well (we set $V=\infty$ outside the well, for $x<0$ or $x>a$). If we create a potential function, then we can change this and create different Hamiltonian matrices. So we'll do this next.
# Define the potential in the square well
def square_well_potential(x,V,a):
    """Potential for a particle in a square well; takes the x array, an array V to hold V(x), and the bump height a"""
    for i in range(x.size):
        V[i] = 0.0
    # Let's define a small bump in the middle of the well
    # NB integer division (//) so that i_mid is a valid array index
    i_mid = x.size//2
    V[i_mid-1] = a
    V[i_mid] = a
    V[i_mid+1] = a
    # Plot to ensure that we know what we're getting
    pl.plot(x,V)
# Declare space for this potential (Vbump) and call the routine
Vbump = np.zeros(num_x_points)
square_well_potential(x,Vbump,0.5)
Now that we have the potential, we need to build the matrix (remember that we have set $\hbar = m = 1$):
$H_{mn} = \langle \phi_m \vert -\frac{1}{2}\nabla^2 + \hat{V}\vert\phi_n\rangle$
By diagonalisation, we will find the eigenvalues.
It's worth noting that we act with $\hat{V}$ by multiplication in position representation:
$$\langle x \vert \hat{V} \vert \phi_m\rangle = V(x) \phi_m(x)$$

Then we make matrix elements by integration in position representation; as we've defined a grid on which to represent the system, we'll just sum over the values on the grid (though there are much more accurate ways to do numerical integration).

$$H_{ij} = \langle \phi_i \vert \hat{H} \vert \phi_j \rangle = \int dx\, \phi_i^\star(x) \hat{H} \phi_j(x)$$

We will print out the matrix of just the potential first: $\langle \phi_m \vert \hat{V} \vert \phi_n\rangle$. This will be useful when thinking about perturbation theory (the change in eigenvalues should be the same as the diagonal elements of the potential matrix, to first order in the potential).
# Declare space for the matrix elements - simplest to start from the identity matrix
Hmat = np.eye(num_basis)
# Define a function to act on a basis function with the potential
def add_pot_on_basis(Hphi,V,phi):
    for i in range(V.size):
        Hphi[i] = Hphi[i] + V[i]*phi[i]
print("Potential matrix elements:")
# Loop over basis functions phi_n (the bra in the matrix element)
# Calculate and output matrix elements of the potential
for n in range(num_basis):
    # Loop over basis functions phi_m (the ket in the matrix element)
    for m in range(num_basis):
        # Act with V on phi_m and store in H_phi_m
        H_phi_m = np.zeros(num_x_points)
        add_pot_on_basis(H_phi_m,Vbump,basis_array[m])
        # Create matrix element by integrating
        H_mn = integrate_functions(basis_array[n],H_phi_m,num_x_points,dx)
        # end=' ' prints without a newline; the %8.3f formats the number
        print("%8.3f" % H_mn, end=' ')
    # This print puts in a new line when we have finished looping over m
    print()
print()
print("Full Hamiltonian")
# Loop over basis functions phi_n (the bra in the matrix element)
# Calculate and store the matrix elements for the full Hamiltonian
for n in range(num_basis):
    # Loop over basis functions phi_m (the ket in the matrix element)
    for m in range(num_basis):
        # Act with H on phi_m and store in H_phi_m
        # First the kinetic energy
        H_phi_m = -0.5*d2basis_array[m]
        # Now the potential
        add_pot_on_basis(H_phi_m,Vbump,basis_array[m])
        # Create matrix element by integrating
        H_mn = integrate_functions(basis_array[n],H_phi_m,num_x_points,dx)
        Hmat[m,n] = H_mn
        # end=' ' prints without a newline; the %8.3f formats the number
        print("%8.3f" % H_mn, end=' ')
    # This print puts in a new line when we have finished looping over m
    print()
Potential matrix elements:
   0.030    0.000   -0.030   -0.000    0.030    0.000   -0.030   -0.000    0.029    0.000
   0.000    0.000   -0.000   -0.000    0.000    0.000   -0.000   -0.000    0.000    0.000
  -0.030   -0.000    0.030    0.000   -0.030   -0.000    0.029    0.000   -0.029   -0.000
  -0.000   -0.000    0.000    0.000   -0.000   -0.000    0.000    0.001   -0.000   -0.001
   0.030    0.000   -0.030   -0.000    0.030    0.000   -0.029   -0.000    0.029    0.000
   0.000    0.000   -0.000   -0.000    0.000    0.001   -0.000   -0.001    0.000    0.001
  -0.030   -0.000    0.029    0.000   -0.029   -0.000    0.029    0.000   -0.029   -0.000
  -0.000   -0.000    0.000    0.001   -0.000   -0.001    0.000    0.001   -0.000   -0.002
   0.029    0.000   -0.029   -0.000    0.029    0.000   -0.029   -0.000    0.028    0.000
   0.000    0.000   -0.000   -0.001    0.000    0.001   -0.000   -0.002    0.000    0.002

Full Hamiltonian
   4.965   -0.000   -0.030   -0.000    0.030    0.000   -0.030   -0.000    0.029   -0.000
  -0.000   19.739    0.000   -0.000   -0.000    0.000    0.000   -0.000   -0.000    0.000
  -0.030   -0.000   44.443    0.000   -0.030    0.000    0.029   -0.000   -0.029    0.000
  -0.000   -0.000    0.000   78.957    0.000   -0.000   -0.000    0.001   -0.000   -0.001
   0.030    0.000   -0.030    0.000  123.400    0.000   -0.029    0.000    0.029   -0.000
   0.000    0.000    0.000   -0.000    0.000  177.654    0.000   -0.001    0.000    0.001
  -0.030    0.000    0.029   -0.000   -0.029   -0.000  241.834   -0.000   -0.029   -0.000
   0.000   -0.000   -0.000    0.001    0.000   -0.001   -0.000  315.829   -0.000   -0.002
   0.029   -0.000   -0.029   -0.000    0.029    0.000   -0.029   -0.000  399.747    0.000
  -0.000    0.000    0.000   -0.001   -0.000    0.001   -0.000   -0.002    0.000  493.482
Notice that two things have changed compared to the perfect square well: first, the diagonal elements are slightly larger; second, there are now off-diagonal elements. Does it surprise you that the sizeable elements appear only in odd rows and columns? Think about the symmetries of the system, particularly of the wavefunctions.
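This pattern can be understood analytically; as a sketch (using the notation already defined above), consider the matrix element of a potential that is symmetric about the centre of the well:

```latex
% Matrix element of the potential between basis states m and n
\langle \phi_m \vert \hat{V} \vert \phi_n \rangle = \int_0^a V(x)\,\phi_m(x)\,\phi_n(x)\,dx
% Under reflection about the centre, x \to a - x, each basis function obeys
\phi_n(a-x) = \sin\left(n\pi - \frac{n\pi x}{a}\right) = (-1)^{n+1}\,\phi_n(x)
% and V(a-x) = V(x) for our centred bump, so the integrand picks up (-1)^{m+n}:
\langle \phi_m \vert \hat{V} \vert \phi_n \rangle = (-1)^{m+n}\,\langle \phi_m \vert \hat{V} \vert \phi_n \rangle
% which forces the matrix element to vanish whenever m+n is odd.
```

In addition, the even-$n$ states have a node at the centre of the well ($\sin(n\pi/2)=0$ for even $n$), so their matrix elements with the narrow central bump are almost zero too; that is why the large elements appear only when both $m$ and $n$ are odd.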
What effect will this have on the eigenvalues and eigenvectors? We'll diagonalise, print out the eigenvalues and plot the first few eigenvectors (as well as looking at their coefficients to get a rough idea of how much they've changed).
# Solve using the linalg module of numpy (which we've imported as la above)
eigval, eigvec = la.eigh(Hmat)
# This one call does the entire solution for the eigenvalues and eigenvectors!
# Print results roughly, though apply a precision of 4 to the printing
np.set_printoptions(precision=4)
print(eigval)
# NB la.eigh returns the eigenvectors as the *columns* of eigvec
print(eigvec[:,0])
print(eigvec[:,1])
print(eigvec[:,2])
[   4.9647   19.7393   44.443    78.9571  123.3996  177.6536  241.8344  315.8286  399.7474  493.4821]
[  1.0000e+00  -2.0635e-17   7.5696e-04  -7.1521e-17  -2.5115e-04   7.7171e-17  -1.2461e-04   3.6019e-17  -7.3983e-05   5.9905e-17]
[  2.2443e-17   1.0000e+00  -2.2204e-15  -2.6578e-06   2.2204e-16   1.4901e-06   5.5511e-17   1.0548e-06   3.8858e-16  -8.1916e-07]
[  7.5708e-04  -2.2204e-15  -1.0000e+00  -3.4056e-16   3.7569e-04   1.3964e-16   1.4913e-04   1.0053e-16   8.1990e-05  -2.4266e-16]
We can see that the eigenvalues look close to the perfect-well values (we'll check them properly below). The eigenvector coefficients show a single dominant value (corresponding to the unperturbed state), with very small contributions from the other basis functions. Now print the eigenvalues and calculate the change.
# Now print out the new eigenvalues, the eigenvalues of the perfect square well, and the difference
print(" Changed Original Difference")
for i in range(num_basis):
    n = i+1
    print("%8.3f %8.3f %8.3f" % (eigval[i],n*n*np.pi*np.pi/2.0,eigval[i] - n*n*np.pi*np.pi/2.0))
 Changed Original Difference
   4.965    4.935    0.030
  19.739   19.739    0.000
  44.443   44.413    0.030
  78.957   78.957    0.000
 123.400  123.370    0.030
 177.654  177.653    0.001
 241.834  241.805    0.029
 315.829  315.827    0.001
 399.747  399.719    0.028
 493.482  493.480    0.002
Compare the changes in the eigenvalues caused by the small added potential with the diagonal elements of the potential matrix. How close is the agreement? You might like to change the magnitude of the potential (higher up, where it's passed as an argument to the potential function) and see how the agreement changes.
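That comparison can be made explicit in code. Here is a standalone sketch (rebuilding the same system as above, not part of the original notebook) that diagonalises the perturbed Hamiltonian and compares each eigenvalue shift with the first-order perturbation theory estimate, the diagonal element $\langle\phi_n\vert\hat{V}\vert\phi_n\rangle$:

```python
import numpy as np
import numpy.linalg as la

# Same grid, basis and bump potential as in the notebook
width = 1.0
num_x_points = 101
x = np.linspace(0.0, width, num_x_points)
dx = width / (num_x_points - 1)
num_basis = 10

basis = np.zeros((num_basis, num_x_points))
d2basis = np.zeros((num_basis, num_x_points))
for i in range(num_basis):
    fac = (i + 1) * np.pi / width
    phi = np.sin(fac * x)
    phi = phi / np.sqrt(np.sum(phi * phi) * dx)  # normalise on the grid
    basis[i, :] = phi
    d2basis[i, :] = -fac * fac * phi

# Bump of height 0.5 on the three central grid points
V = np.zeros(num_x_points)
i_mid = num_x_points // 2
V[i_mid - 1:i_mid + 2] = 0.5

# Hamiltonian matrix H_mn = <phi_m| -(1/2) d^2/dx^2 + V |phi_n>
H = np.zeros((num_basis, num_basis))
for m in range(num_basis):
    for n in range(num_basis):
        H_phi_n = -0.5 * d2basis[n] + V * basis[n]
        H[m, n] = np.sum(basis[m] * H_phi_n) * dx
eigval, eigvec = la.eigh(H)

# Compare the exact shift with the first-order estimate <phi_n|V|phi_n>
for i in range(num_basis):
    E0 = (i + 1)**2 * np.pi**2 / 2.0  # unperturbed eigenvalue
    first_order = np.sum(basis[i] * V * basis[i]) * dx
    print("%8.5f %8.5f" % (eigval[i] - E0, first_order))
```

The two columns agree to within the second-order correction, which is tiny here because the bump's matrix elements are small compared with the spacing of the unperturbed eigenvalues.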
Now we'll plot the eigenvectors (after building them) and look at the change with respect to the eigenvectors of the original system. Remember that any function in this space can be written as:
$$\vert\psi\rangle = \sum_i c_i \vert \phi_i \rangle$$

We'll use this to build the eigenfunctions of the perturbed system. In this case, $c_i$ are the coefficients in the eigenvector of the matrix, and $\vert\phi_i\rangle$ are the basis functions.
# Define a figure to take two plots
fig = pl.figure(figsize=[12,3])
# Add subplots: rows, columns, index number
ax = fig.add_subplot(121,autoscale_on=False,xlim=(0,1),ylim=(-2,2))
ax.set_title("Eigenvectors for changed system")
ax2 = fig.add_subplot(122,autoscale_on=False,xlim=(0,1),ylim=(-0.002,0.002))
ax2.set_title("Difference from perfect eigenvectors")
# Plot the first four states
for m in range(4):
    # Build the eigenfunction on the grid from the basis expansion
    psi = np.zeros(num_x_points)
    for i in range(num_basis):
        psi = psi+eigvec[i,m]*basis_array[i]
    # Fix the overall sign so that psi and the basis function have the same phase
    if eigvec[m,m] < 0:
        psi = -psi
    ax.plot(x,psi)
    # Now subtract the unperturbed eigenvector to see the change from the perturbation
    psi = psi - basis_array[m]
    ax2.plot(x,psi)
Notice how the eigenvectors for this changed system are very close to those of the original system. This is a perfect example of a system well suited to perturbation theory. The changes in the eigenvectors can be related to matrix elements of the potential and the eigenvalues of the unperturbed system, but we won't do this just yet.
In the next notebook, we will explore a more complex perturbation (a truncated quantum harmonic oscillator, or QHO) and the effects of varying the magnitude of the perturbation. As well as improving your understanding of matrix mechanics, this should give a useful insight into the limitations of perturbation theory and finite basis sets.