This notebook walks you through how to implement $ y := A x + y $ via an algorithm that "marches" through the matrix in an alternative way.
We will use some functions from our laff library (of which this routine will become a part), as well as some routines from the FLAME API (Application Programming Interface) that allow us to write code that closely resembles how we typeset algorithms with the FLAME notation. These functions are imported with the "import laff as laff" and "import flame" statements.
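For reference, the target operation $ y := A x + y $ can be computed directly with numpy; the routines in this notebook produce the same result, but march through $ A $ one element (or column) at a time:

```python
import numpy as np

# The target operation, y := A x + y, computed in one shot with numpy.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])
y = np.array([1.0, 1.0])

y = A @ x + y
print(y)  # [4. 8.]
```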
Mvmult_n_unb_var1B( A, x, y )
This routine, given $ A \in \mathbb{R}^{n \times n} $, $ x \in \mathbb{R}^n $, and $ y \in \mathbb{R}^n $, computes $ y := A x + y $. The "n" in the title of the routine indicates that this is the "no transpose" matrix-vector multiplication. The "B" means this is the algorithm that marches through matrices from top-left to bottom-right.
The specific laff function we will use is
laff.dots( x, y, alpha )
which computes $ \alpha := x^T y + \alpha $. Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)
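As a sketch of these semantics: the real laff.dots updates its third argument in place, but the hypothetical stand-in below simply returns the updated value, which is enough to see what each call in the loop contributes.

```python
import numpy as np

# Sketch of the semantics of laff.dots( x, y, alpha ):
#     alpha := x^T y + alpha.
# The real laff routine updates alpha in place; this hypothetical
# stand-in returns the updated value instead.
def dots_sketch(x, y, alpha):
    return float(np.dot(np.ravel(x), np.ravel(y))) + alpha

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(dots_sketch(x, y, 1.0))  # 4 + 10 + 18 + 1 = 33.0
```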
import flame
import laff as laff

def Mvmult_n_unb_var1B(A, x, y):

    ATL, ATR, \
    ABL, ABR  = flame.part_2x2(A, \
                               0, 0, 'TL')

    xT, \
    xB  = flame.part_2x1(x, \
                         0, 'TOP')

    yT, \
    yB  = flame.part_2x1(y, \
                         0, 'TOP')

    while ATL.shape[0] < A.shape[0]:

        A00,  a01,     A02,  \
        a10t, alpha11, a12t, \
        A20,  a21,     A22   = flame.repart_2x2_to_3x3(ATL, ATR, \
                                                       ABL, ABR, \
                                                       1, 1, 'BR')

        x0,   \
        chi1, \
        x2    = flame.repart_2x1_to_3x1(xT, \
                                        xB, \
                                        1, 'BOTTOM')

        y0,   \
        psi1, \
        y2    = flame.repart_2x1_to_3x1(yT, \
                                        yB, \
                                        1, 'BOTTOM')

        #------------------------------------------------------------#

        laff.dots( a10t, x0, psi1 )        # psi1 := a10^T x0 + psi1
        laff.dots( alpha11, chi1, psi1 )   # psi1 := alpha11 chi1 + psi1
        laff.dots( a12t, x2, psi1 )        # psi1 := a12^T x2 + psi1

        #------------------------------------------------------------#

        ATL, ATR, \
        ABL, ABR  = flame.cont_with_3x3_to_2x2(A00,  a01,     A02,  \
                                               a10t, alpha11, a12t, \
                                               A20,  a21,     A22,  \
                                               'TL')

        xT, \
        xB  = flame.cont_with_3x1_to_2x1(x0,   \
                                         chi1, \
                                         x2,   \
                                         'TOP')

        yT, \
        yB  = flame.cont_with_3x1_to_2x1(y0,   \
                                         psi1, \
                                         y2,   \
                                         'TOP')

    flame.merge_2x1(yT, \
                    yB, y)
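Each iteration of the loop computes one entry $ \psi_1 := a_{10}^T x_0 + \alpha_{11} \chi_1 + a_{12}^T x_2 + \psi_1 $, which together amount to a dot product of one row of $ A $ with all of $ x $. Stripped of the FLAME partitioning, the same traversal can be sketched in plain numpy (a hypothetical helper, not part of laff):

```python
import numpy as np

# Dot-product (row-wise) view of y := A x + y: each iteration of the
# outer loop updates one psi_i with the dot product of row i of A and x.
def mvmult_dots_sketch(A, x, y):
    m, n = A.shape
    for i in range(m):            # one row of A per iteration
        for j in range(n):
            y[i] += A[i, j] * x[j]
    return y

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])
y = np.array([1.0, 1.0])
print(mvmult_dots_sketch(A, x, y))  # [4. 8.]
```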
Let's quickly test the routine by creating a 4 x 4 matrix and related vectors, and performing the computation.
from numpy import random
from numpy import matrix
A = matrix( random.rand( 4,4 ) )
x = matrix( random.rand( 4,1 ) )
y = matrix( random.rand( 4,1 ) )
yold = matrix( random.rand( 4,1 ) )
print( 'A before =' )
print( A )
print( 'x before =' )
print( x )
print( 'y before =' )
print( y )
laff.copy( y, yold ) # save the original vector y
Mvmult_n_unb_var1B( A, x, y )
print( 'y after =' )
print( y )
print( 'y - ( A * x + yold ) = ' )
print( y - ( A * x + yold ) )
Bingo, it seems to work! (Notice that we are doing floating point computations, which means that due to rounding you may not get an exact "0", but it should be close.)
Copy and paste the code into PictureFLAME, a webpage where you can watch your routine in action. Just cut and paste into the box.
Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.
If you want to reset the problem, just click in the box into which you pasted the code and hit "next" again.
Mvmult_n_unb_var2B( A, x, y )
This routine, given $ A \in \mathbb{R}^{n \times n} $, $ x \in \mathbb{R}^n $, and $ y \in \mathbb{R}^n $, computes $ y := A x + y $. The "n" in the name of the routine indicates this is the "no transpose" matrix-vector multiplication. The "B" means this is the algorithm that marches through matrices from top-left to bottom-right.
The specific laff function we will use is
laff.axpy( alpha, x, y )
which computes $ y := \alpha x + y $. Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)
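As a sketch of these semantics: the real laff.axpy updates $ y $ in place, but the hypothetical stand-in below simply returns the result, which is enough to see what each call in the loop contributes.

```python
import numpy as np

# Sketch of the semantics of laff.axpy( alpha, x, y ):
#     y := alpha * x + y.
# The real laff routine updates y in place; this hypothetical
# stand-in returns the result instead.
def axpy_sketch(alpha, x, y):
    return alpha * x + y

x = np.array([1.0, 2.0])
y = np.array([10.0, 20.0])
print(axpy_sketch(3.0, x, y))  # [13. 26.]
```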
import flame
import laff as laff

def Mvmult_n_unb_var2B(A, x, y):

    ATL, ATR, \
    ABL, ABR  = flame.part_2x2(A, \
                               0, 0, 'TL')

    xT, \
    xB  = flame.part_2x1(x, \
                         0, 'TOP')

    yT, \
    yB  = flame.part_2x1(y, \
                         0, 'TOP')

    while ATL.shape[0] < A.shape[0]:

        A00,  a01,     A02,  \
        a10t, alpha11, a12t, \
        A20,  a21,     A22   = flame.repart_2x2_to_3x3(ATL, ATR, \
                                                       ABL, ABR, \
                                                       1, 1, 'BR')

        x0,   \
        chi1, \
        x2    = flame.repart_2x1_to_3x1(xT, \
                                        xB, \
                                        1, 'BOTTOM')

        y0,   \
        psi1, \
        y2    = flame.repart_2x1_to_3x1(yT, \
                                        yB, \
                                        1, 'BOTTOM')

        #------------------------------------------------------------#

        laff.axpy( chi1, a01, y0 )         # y0 := chi1 a01 + y0
        laff.axpy( chi1, alpha11, psi1 )   # psi1 := chi1 alpha11 + psi1
        laff.axpy( chi1, a21, y2 )         # y2 := chi1 a21 + y2

        #------------------------------------------------------------#

        ATL, ATR, \
        ABL, ABR  = flame.cont_with_3x3_to_2x2(A00,  a01,     A02,  \
                                               a10t, alpha11, a12t, \
                                               A20,  a21,     A22,  \
                                               'TL')

        xT, \
        xB  = flame.cont_with_3x1_to_2x1(x0,   \
                                         chi1, \
                                         x2,   \
                                         'TOP')

        yT, \
        yB  = flame.cont_with_3x1_to_2x1(y0,   \
                                         psi1, \
                                         y2,   \
                                         'TOP')

    flame.merge_2x1(yT, \
                    yB, y)
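Each iteration of this loop performs the axpy update $ y := \chi_1 a_1 + y $ with one column of $ A $, so the routine accumulates $ A x $ column by column rather than row by row. Stripped of the FLAME partitioning, the same traversal can be sketched in plain numpy (a hypothetical helper, not part of laff):

```python
import numpy as np

# Axpy (column-wise) view of y := A x + y: each iteration of the
# loop adds chi_j times column j of A into all of y.
def mvmult_axpy_sketch(A, x, y):
    m, n = A.shape
    for j in range(n):            # one column of A per iteration
        y += x[j] * A[:, j]
    return y

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])
y = np.array([1.0, 1.0])
print(mvmult_axpy_sketch(A, x, y))  # [4. 8.]
```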
Let's quickly test the routine by creating a 4 x 4 matrix and related vectors, and performing the computation.
from numpy import random
from numpy import matrix
A = matrix( random.rand( 4,4 ) )
x = matrix( random.rand( 4,1 ) )
y = matrix( random.rand( 4,1 ) )
yold = matrix( random.rand( 4,1 ) )
print( 'A before =' )
print( A )
print( 'x before =' )
print( x )
print( 'y before =' )
print( y )
laff.copy( y, yold ) # save the original vector y
Mvmult_n_unb_var2B( A, x, y )
print( 'y after =' )
print( y )
print( 'y - ( A * x + yold ) = ' )
print( y - ( A * x + yold ) )
Bingo, it seems to work! (Notice that we are doing floating point computations, which means that due to rounding you may not get an exact "0".)
Copy and paste the code into PictureFLAME, a webpage where you can watch your routine in action. Just cut and paste into the box.
Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.
If you want to reset the problem, just click in the box into which you pasted the code and hit "next" again.