With this notebook, we demonstrate how the Power Method can be used to compute the eigenvector associated with the largest eigenvalue (in magnitude).
Be sure to make a copy of this notebook before you modify it!
We start by creating a matrix with known eigenvalues and eigenvectors. How do we do this? If $ \Lambda $ is a diagonal matrix and $ V $ is invertible, then $ A = V \Lambda V^{-1} $ has the diagonal entries of $ \Lambda $ as its eigenvalues and the corresponding columns of $ V $ as its eigenvectors.
Experiment by changing the eigenvalues! What happens if you make the second entry on the diagonal equal to $ -4 $? Or what if you set the $ 2 $ to $ -1 $?
import numpy as np    # only numpy is needed in this notebook
Lambda = np.matrix( ' 4., 0., 0., 0.;\
                      0., 3., 0., 0.;\
                      0., 0., 2., 0.;\
                      0., 0., 0., 1.' )
lambda0 = Lambda[ 0,0 ]
V = np.matrix( np.random.rand( 4,4 ) )
# normalize the columns of V to have length one
for j in range( 0, 4 ):
    V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[ :, j ] ) * V[ :, j ] )
A = V * Lambda * np.linalg.inv( V )
print( 'Lambda = ' )
print( Lambda )
print( 'V = ' )
print( V )
print( 'A = ' )
print( A )
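As a quick sanity check (this cell is not part of the original notebook; the fixed random seed is an assumption made so the cell is reproducible on its own), we can verify that each column of $ V $ is indeed an eigenvector of $ A $, with the corresponding diagonal entry of $ \Lambda $ as its eigenvalue:

```python
import numpy as np

np.random.seed( 2 )                    # assumed seed, for reproducibility

# Same construction as above: A = V Lambda V^{-1}
Lambda = np.matrix( np.diag( [ 4., 3., 2., 1. ] ) )
V = np.matrix( np.random.rand( 4,4 ) )
for j in range( 0, 4 ):
    V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[ :, j ] ) * V[ :, j ] )
A = V * Lambda * np.linalg.inv( V )

# A v_j - lambda_j v_j should be (numerically) zero for each column j
for j in range( 0, 4 ):
    residual = np.linalg.norm( A * V[ :, j ] - Lambda[ j, j ] * V[ :, j ] )
    print( 'column', j, ': || A v_j - lambda_j v_j || =', residual )
```

The residuals should be on the order of machine precision (magnified somewhat by the conditioning of the random $ V $).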
# Pick a random starting vector
x = np.matrix( np.random.rand( 4,1 ) )
for i in range( 0, 10 ):
    x = A * x
    # normalize x to length one
    x = x / np.sqrt( np.transpose( x ) * x )
    print( 'Rayleigh quotient with vector x:', np.transpose( x ) * A * x / ( np.transpose( x ) * x ) )
    print( 'inner product of x with v0     :', np.transpose( x ) * V[ :, 0 ] )
    print( ' ' )
In the above, the Rayleigh quotient, $ x^T A x / ( x^T x ) $, should converge to the eigenvalue that is largest in magnitude, $ 4.0 $, and the inner product of $ x $ with $ v_0 $ should converge to $ 1 $ or $ -1 $, since $ x $ converges to a vector in the direction of $ v_0 $.
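How fast this happens is governed by the ratio $ | \lambda_1 | / | \lambda_0 | = 3/4 $: each iteration shrinks the component of $ x $ that does not point in the direction of $ v_0 $ by roughly that factor. The following sketch (not part of the original notebook; the random seed and the list `errors` are assumptions made for illustration) tracks the error in the Rayleigh quotient over the iterations:

```python
import numpy as np

np.random.seed( 1 )                    # assumed seed, for reproducibility

# Same construction as above: eigenvalues 4, 3, 2, 1
Lambda = np.matrix( np.diag( [ 4., 3., 2., 1. ] ) )
V = np.matrix( np.random.rand( 4,4 ) )
for j in range( 0, 4 ):
    V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[ :, j ] ) * V[ :, j ] )
A = V * Lambda * np.linalg.inv( V )

x = np.matrix( np.random.rand( 4,1 ) )
errors = []                            # | Rayleigh quotient - 4.0 | at each iteration
for i in range( 0, 40 ):
    x = A * x
    x = x / np.sqrt( np.transpose( x ) * x )
    rq = float( np.transpose( x ) * A * x / ( np.transpose( x ) * x ) )
    errors.append( abs( rq - 4.0 ) )

print( 'error after first iteration:', errors[ 0 ] )
print( 'error after last iteration :', errors[ -1 ] )
```

Dividing successive entries of `errors` should give ratios that settle near $ 3/4 $ as the iteration proceeds.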
If you change the "3" on the diagonal to "-4", then $ A $ has two eigenvalues of largest magnitude, and the vector $ x $ will end up in the space spanned by $ v_0 $ and $ v_1 $. You can check this by computing $ ( I - V_L ( V_L^T V_L )^{-1} V_L^T ) x $, where $ V_L $ equals the matrix with $ v_0 $ and $ v_1 $ as its columns, and verifying that this component of $ x $ orthogonal to $ {\cal C}( V_L ) $ converges to zero. This is seen in the following code block:
VL = V[ :, 0:2 ]
w = x - VL * np.linalg.inv( np.transpose( VL ) * VL ) * np.transpose( VL ) * x
print( 'Norm of component orthogonal: ', np.linalg.norm( w ) )
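That experiment can be sketched end to end as follows (again not part of the original notebook; the seed and the helper variable `VL` are assumptions). With $ 4 $ and $ -4 $ both largest in magnitude, $ x $ never settles on a single eigenvector, but its component orthogonal to $ {\cal C}( V_L ) $ still shrinks by roughly a factor $ 1/2 $ (that is, $ 2/4 $) per iteration:

```python
import numpy as np

np.random.seed( 0 )                    # assumed seed, for reproducibility

# Same setup as before, but with the second diagonal entry set to -4
Lambda = np.matrix( np.diag( [ 4., -4., 2., 1. ] ) )
V = np.matrix( np.random.rand( 4,4 ) )
for j in range( 0, 4 ):
    V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[ :, j ] ) * V[ :, j ] )
A = V * Lambda * np.linalg.inv( V )

x = np.matrix( np.random.rand( 4,1 ) )
for i in range( 0, 20 ):
    x = A * x
    x = x / np.sqrt( np.transpose( x ) * x )

# Component of x orthogonal to C( V_L ), where V_L = [ v0 v1 ]
VL = V[ :, 0:2 ]
w = x - VL * np.linalg.inv( np.transpose( VL ) * VL ) * np.transpose( VL ) * x
print( 'Norm of component orthogonal to C( V_L ):', np.linalg.norm( w ) )
```

After twenty iterations the orthogonal component should be small, even though the Rayleigh quotient itself no longer converges to a single value.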