Sebastian Raschka
last updated: 05/09/2014

#### The code in this notebook was executed in Python 3.4.0

I am really looking forward to your comments and suggestions for improving and
extending this little collection! Just send me a quick note
or an email: [email protected]


# Introduction

Linear regression via the least squares method is the simplest approach to performing a regression analysis of a dependent and an explanatory variable. The objective is to find the best-fitting straight line through a set of points that minimizes the sum of the squared offsets from the line.
The offsets come in two different flavors with respect to the line: perpendicular and vertical. As Michael Burger summarizes it nicely in his article "Problems of Linear Least Square Regression - And Approaches to Handle Them": "the perpendicular offset method delivers a more precise result but is more complicated to handle. Therefore normally the vertical offsets are used."
Here, we will also use the vertical offset method.

In more mathematical terms, our goal is to compute the best fit to $n$ points $(x_i, y_i)$ with $i=1,2,\dots,n$, via a linear equation of the form
$f(x) = a\cdot x + b$.
We further have to assume that the y-component is functionally dependent on the x-component.
In a cartesian coordinate system, $b$ is the intercept of the straight line with the y-axis, and $a$ is the slope of this line.

In order to obtain the parameters of the linear regression line for a set of multiple points, we can re-write the problem as a matrix equation
$\pmb X \; \pmb a = \pmb y$

$\Rightarrow\Bigg[ \begin{array}{cc} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{array} \Bigg]$ $\bigg[ \begin{array}{c} a \\ b \end{array} \bigg]$ $=\Bigg[ \begin{array}{c} y_1 \\ \vdots \\ y_n \end{array} \Bigg]$

With a little bit of calculus, we can rearrange the term in order to obtain the parameter vector $\pmb a = [a\;b]^T$

$\Rightarrow \pmb a = (\pmb X^T \; \pmb X)^{-1} \pmb X^T \; \pmb y$
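
To make the rearrangement explicit (this is the standard derivation of the normal equations, not something specific to this notebook): the parameter vector minimizes the sum of squared vertical offsets, and setting the gradient with respect to $\pmb a$ to zero gives

$\frac{\partial}{\partial \pmb a} \|\pmb X \pmb a - \pmb y\|^2 = 2\,\pmb X^T(\pmb X \pmb a - \pmb y) = 0 \quad\Rightarrow\quad \pmb X^T \pmb X \; \pmb a = \pmb X^T \pmb y$

so that, provided $\pmb X^T \pmb X$ is invertible, multiplying both sides by $(\pmb X^T \pmb X)^{-1}$ yields the expression above.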

The more classic approach to obtain the slope parameter $a$ and y-axis intercept $b$ would be:

$a = \frac{S_{xy}}{\sigma_{x}^{2}}\quad$ (slope)

$b = \bar{y} - a\bar{x}\quad$ (y-axis intercept)

where

$S_{xy} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})\quad$ (covariance)

$\sigma_{x}^{2} = \sum_{i=1}^{n} (x_i - \bar{x})^2\quad$ (variance)
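
As a quick sanity check (with toy numbers of my own choosing, not data from the benchmarks below), the classic formulas and the matrix equation produce the same parameters:

```python
import numpy as np

# Toy data, roughly on the line y = 2x + 1 (values assumed for illustration)
x = np.array([0., 1., 2.])
y = np.array([1.1, 2.9, 5.0])

# Classic approach: slope from covariance/variance, intercept from the means
x_avg, y_avg = x.mean(), y.mean()
a = np.sum((x - x_avg) * (y - y_avg)) / np.sum((x - x_avg)**2)
b = y_avg - a * x_avg

# Matrix approach: a = (X^T X)^-1 X^T y
X = np.vstack([x, np.ones(len(x))]).T
a_m, b_m = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)

print(np.allclose([a, b], [a_m, b_m]))  # True - both routes agree
```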

# Least squares fit implementations

### 1. The matrix approach in (C)Python and NumPy

First, let us implement the equation:

$\pmb a = (\pmb X^T \; \pmb X)^{-1} \pmb X^T \; \pmb y$

which I will refer to as the "matrix approach".

#### Matrix approach implemented in NumPy and (C)Python

import numpy as np

def py_matrix_lstsqr(x, y):
    """ Computes the least-squares solution to a linear matrix equation. """
    X = np.vstack([x, np.ones(len(x))]).T
    return (np.linalg.inv(X.T.dot(X)).dot(X.T)).dot(y)
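
As a side note, not part of the benchmarks: explicitly inverting $\pmb X^T \pmb X$ is generally discouraged for numerical reasons. A common alternative is to solve the normal equations directly via np.linalg.solve (the function name below is my own, added for illustration):

```python
import numpy as np

def py_matrix_lstsqr_solve(x, y):
    """ Solves the normal equations X^T X a = X^T y without forming an explicit inverse. """
    X = np.vstack([x, np.ones(len(x))]).T
    return np.linalg.solve(X.T.dot(X), X.T.dot(y))
```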


### 2. The classic approach in (C)Python, Cython, and Numba

Next, we will calculate the parameters separately, using standard library functions in Python only, which I will call the "classic approach".

$a = \frac{S_{xy}}{\sigma_{x}^{2}}\quad$ (slope)

$b = \bar{y} - a\bar{x}\quad$ (y-axis intercept)

Note: I refrained from using list comprehensions and convenience functions such as zip() in order to maximize the performance for the Cython compilation into C code in the later sections.

#### Implemented in (C)Python

def py_classic_lstsqr(x, y):
    """ Computes the least-squares solution to a linear matrix equation. """
    len_x = len(x)
    x_avg = sum(x)/len_x
    y_avg = sum(y)/len(y)
    var_x = 0
    cov_xy = 0
    for i in range(len_x):
        temp = (x[i] - x_avg)
        var_x += temp**2
        cov_xy += temp*(y[i] - y_avg)
    slope = cov_xy / var_x
    y_interc = y_avg - slope*x_avg
    return (slope, y_interc)


#### Implemented in Cython

Maybe we can speed things up a little bit via Cython's C extensions for Python. Cython is basically a hybrid between C and Python and can be pictured as Python code compiled with C type declarations.
Since we are working in an IPython notebook here, we can make use of the very convenient %%cython cell magic: it will take care of the conversion to C code, the compilation, and eventually the loading of the function.

%load_ext cythonmagic

%%cython
def cy_classic_lstsqr(x, y):
    """ Computes the least-squares solution to a linear matrix equation. """
    cdef double x_avg, y_avg, var_x, cov_xy,\
        slope, y_interc, temp
    cdef int i, len_x
    len_x = len(x)
    x_avg = sum(x)/len_x
    y_avg = sum(y)/len(y)
    var_x = 0
    cov_xy = 0
    for i in range(len_x):
        temp = (x[i] - x_avg)
        var_x += temp**2
        cov_xy += temp*(y[i] - y_avg)
    slope = cov_xy / var_x
    y_interc = y_avg - slope*x_avg
    return (slope, y_interc)


#### Implemented in Numba

As we did with Cython before, we will use the minimalist approach to Numba and see how the two - Cython and Numba - compare against each other.

Numba uses the LLVM compiler infrastructure to compile Python code to machine code. Its strength is working with NumPy arrays to speed up the code. If you want to read more about Numba, please refer to the original website and documentation.

from numba import jit

@jit
def numba_classic_lstsqr(x, y):
    """ Computes the least-squares solution to a linear matrix equation. """
    len_x = len(x)
    x_avg = sum(x)/len_x
    y_avg = sum(y)/len(y)
    var_x = 0
    cov_xy = 0
    for i in range(len_x):
        temp = (x[i] - x_avg)
        var_x += temp**2
        cov_xy += temp*(y[i] - y_avg)
    slope = cov_xy / var_x
    y_interc = y_avg - slope*x_avg
    return (slope, y_interc)


### 3. Using the numpy.linalg.lstsq function

For our convenience, NumPy has a function that computes the least squares solution of a linear matrix equation. For more information, please refer to the documentation.

def numpy_lstsqr(x, y):
    """ Computes the least-squares solution to a linear matrix equation. """
    X = np.vstack([x, np.ones(len(x))]).T
    return np.linalg.lstsq(X, y)[0]  # [0] extracts the coefficient array
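
Note that np.linalg.lstsq returns more than just the coefficients. A small sketch of the full return tuple, with toy data assumed for illustration (the rcond=None argument silences a deprecation warning in newer NumPy versions and was not needed in 2014):

```python
import numpy as np

# Toy data, roughly on the line y = 2x + 1 (assumed for illustration)
x = np.array([0., 1., 2., 3.])
y = np.array([1.0, 3.1, 4.9, 7.2])

X = np.vstack([x, np.ones(len(x))]).T

# lstsq returns (coefficients, residual sum of squares, rank, singular values)
coeffs, residuals, rank, sing_vals = np.linalg.lstsq(X, y, rcond=None)
slope, intercept = coeffs

print(rank)  # 2 - the design matrix has full column rank
```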


### 4. Using the scipy.stats.linregress function

SciPy also has a least squares function, scipy.stats.linregress(), which returns a tuple of five different attributes; the first value in the tuple is the slope, and the second value is the y-axis intercept.
The documentation for this function can be found here.

import scipy.stats

def scipy_lstsqr(x, y):
    """ Computes the least-squares solution to a linear matrix equation. """
    return scipy.stats.linregress(x, y)[0:2]
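
Since linregress returns five values, it can also be handy to unpack them all at once. A sketch with assumed toy data:

```python
import scipy.stats

# Perfectly linear toy data (assumed): y = 2x + 1
x = [0, 1, 2, 3]
y = [1, 3, 5, 7]

# linregress also returns the correlation coefficient, the p-value
# of a zero-slope hypothesis test, and the standard error of the slope
slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)

print(slope, intercept)  # slope 2.0, intercept 1.0; r_value is 1 for a perfect fit
```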


# Generating sample data and benchmarking

#### Visualization

To check how our dataset is distributed and what the least squares regression line looks like, we will plot the results in a scatter plot.
Note that, for simplicity, we are only using our "matrix approach" to visualize the results. We expect all four approaches to produce similar results, which we will confirm after visualizing the data.

%matplotlib inline

from matplotlib import pyplot as plt
import random

random.seed(12345)

x = [x_i*random.randrange(8,12)/10 for x_i in range(500)]
y = [y_i*random.randrange(8,12)/10 for y_i in range(100,600)]

slope, intercept = py_matrix_lstsqr(x, y)

line_x = [round(min(x)) - 1, round(max(x)) + 1]
line_y = [slope*x_i + intercept for x_i in line_x]

plt.figure(figsize=(8,8))
plt.scatter(x, y)
plt.plot(line_x, line_y, color='red', lw=2)

plt.ylabel('y')
plt.xlabel('x')
plt.title('Linear regression via least squares fit')

ftext = 'y = ax + b = {:.3f}x + {:.3f}'.format(slope, intercept)
plt.figtext(.15, .8, ftext, fontsize=11, ha='left')

plt.show()

#### Comparing the results from the different implementations

As mentioned above, let us now confirm that the different implementations computed the same parameters (i.e., slope and y-axis intercept) as the solution of the linear equation.

import prettytable

params = [appr(x,y) for appr in [py_matrix_lstsqr, py_classic_lstsqr, numpy_lstsqr, scipy_lstsqr]]

print(params)

fit_table = prettytable.PrettyTable(["", "slope", "y-intercept"])
fit_table.add_row(["matrix approach", params[0][0], params[0][1]])
fit_table.add_row(["classic approach", params[1][0], params[1][1]])
fit_table.add_row(["numpy function", params[2][0], params[2][1]])
fit_table.add_row(["scipy function", params[3][0], params[3][1]])

print(fit_table)

[array([   0.95181895,  107.01399744]), (0.9518189531912674, 107.01399744459181), array([   0.95181895,  107.01399744]), (0.95181895319126764, 107.01399744459175)]
+------------------+--------------------+--------------------+
|                  |       slope        |    y-intercept     |
+------------------+--------------------+--------------------+
| matrix approach  |   0.951818953191   |   107.013997445    |
| classic approach | 0.9518189531912674 | 107.01399744459181 |
|  numpy function  |   0.951818953191   |   107.013997445    |
|  scipy function  |   0.951818953191   |   107.013997445    |
+------------------+--------------------+--------------------+


# Performance growth rates: (C)Python vs. Cython vs. Numba

Now, finally, let us take a look at the effect of different sample sizes on the relative performance of each approach.
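
The timing pattern used below relies on timeit.Timer(stmt, setup).timeit(n), which returns the total time in seconds for n executions; since we time 1,000 runs, the raw number conveniently equals the average time per run in milliseconds. A minimal sketch of the pattern with a throwaway statement:

```python
import timeit

# Total seconds for 1000 executions of the statement;
# numerically identical to milliseconds per single execution.
t = timeit.Timer('sum(range(100))').timeit(1000)
print(t > 0)  # True
```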

import timeit
import random
random.seed(12345)

funcs = ['py_classic_lstsqr', 'cy_classic_lstsqr', 'numba_classic_lstsqr']

orders_n = [10**n for n in range(1, 7)]
perf1 = {f:[] for f in funcs}

for n in orders_n:
    x_list = [x_i*random.randrange(8,12)/10 for x_i in range(n)]
    y_list = [y_i*random.randrange(10,14)/10 for y_i in range(n)]
    for f in funcs:
        perf1[f].append(timeit.Timer('%s(x_list,y_list)' %f,
                'from __main__ import %s, x_list, y_list' %f).timeit(1000))

from matplotlib import pyplot as plt

labels = [('py_classic_lstsqr', '"classic" least squares in reg. (C)Python'),
          ('cy_classic_lstsqr', '"classic" least squares in Cython'),
          ('numba_classic_lstsqr', '"classic" least squares in Numba')]

plt.rcParams.update({'font.size': 12})

fig = plt.figure(figsize=(10,8))
for lb, lab in labels:
    plt.plot(orders_n, perf1[lb], alpha=0.5, label=lab, marker='o', lw=3)
plt.xlabel('sample size n')
plt.ylabel('time per computation in milliseconds [ms]')
#plt.xlim([1,max(orders_n) + max(orders_n) * 10])
plt.legend(loc=4)
plt.grid()
plt.xscale('log')
plt.yscale('log')
max_perf = max(py/cy for py,cy in zip(perf1['py_classic_lstsqr'],
                                      perf1['cy_classic_lstsqr']))
min_perf = min(py/cy for py,cy in zip(perf1['py_classic_lstsqr'],
                                      perf1['cy_classic_lstsqr']))
ftext = 'Using Cython is {:.2f}x to '\
        '{:.2f}x faster than regular (C)Python'\
        .format(min_perf, max_perf)
plt.figtext(.14, .75, ftext, fontsize=11, ha='left')
plt.title('Performance of least square fit implementations')
plt.show()

# Performance growth rates: NumPy and SciPy library functions

Okay, now that we have seen that Cython improved the performance of our Python code, let us see how it and the matrix approach (using NumPy) compare against the built-in NumPy and SciPy library functions.

Note that we are now passing numpy.arrays to the NumPy, SciPy, and (C)Python matrix functions (not to the Cython implementation, though!), since they are optimized for them.

import timeit
import random
random.seed(12345)

funcs = ['cy_classic_lstsqr', 'py_matrix_lstsqr',
         'numpy_lstsqr', 'scipy_lstsqr']

orders_n = [10**n for n in range(1, 7)]
perf2 = {f:[] for f in funcs}

for n in orders_n:
    x_list = [x_i*random.randrange(8,12)/10 for x_i in range(n)]
    y_list = [y_i*random.randrange(10,14)/10 for y_i in range(n)]
    x_ary = np.asarray(x_list)
    y_ary = np.asarray(y_list)
    for f in funcs:
        if f != 'cy_classic_lstsqr':
            perf2[f].append(timeit.Timer('%s(x_ary,y_ary)' %f,
                    'from __main__ import %s, x_ary, y_ary' %f).timeit(1000))
        else:
            perf2[f].append(timeit.Timer('%s(x_list,y_list)' %f,
                    'from __main__ import %s, x_list, y_list' %f).timeit(1000))

labels = [('cy_classic_lstsqr', '"classic" least squares in Cython (python lists)'),
          ('py_matrix_lstsqr', '"matrix" least squares in (C)Python and NumPy'),
          ('numpy_lstsqr', 'numpy.linalg.lstsq'),
          ('scipy_lstsqr', 'scipy.stats.linregress')]

plt.rcParams.update({'font.size': 12})

fig = plt.figure(figsize=(10,8))
for lb, lab in labels:
    plt.plot(orders_n, perf2[lb], alpha=0.5, label=lab, marker='o', lw=3)
plt.xlabel('sample size n')
plt.ylabel('time per computation in milliseconds [ms]')
#plt.xlim([1,max(orders_n) + max(orders_n) * 10])
plt.legend(loc=4)
plt.grid()
plt.xscale('log')
plt.yscale('log')
plt.title('Performance of least square fit implementations')
plt.show()

# Bonus: How to use Cython without the IPython magic

IPython's notebook is really great for exploratory analysis and documentation, but what if we want to compile our Python code via Cython without letting IPython's magic do all the work?
These are the steps you would need.

#### 1. Creating a .pyx file containing the desired code or function

%%file ccy_classic_lstsqr.pyx

def ccy_classic_lstsqr(x, y):
    """ Computes the least-squares solution to a linear matrix equation. """
    x_avg = sum(x)/len(x)
    y_avg = sum(y)/len(y)
    var_x = sum([(x_i - x_avg)**2 for x_i in x])
    cov_xy = sum([(x_i - x_avg)*(y_i - y_avg) for x_i,y_i in zip(x,y)])
    slope = cov_xy / var_x
    y_interc = y_avg - slope*x_avg
    return (slope, y_interc)

Writing ccy_classic_lstsqr.pyx


#### 2. Creating a simple setup file

%%file setup.py

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = [Extension("ccy_classic_lstsqr", ["ccy_classic_lstsqr.pyx"])]
)

Writing setup.py


#### 3. Building and Compiling

!python3 setup.py build_ext --inplace

running build_ext
cythoning ccy_classic_lstsqr.pyx to ccy_classic_lstsqr.c
building 'ccy_classic_lstsqr' extension
creating build
creating build/temp.macosx-10.6-intel-3.4
/usr/bin/clang -fno-strict-aliasing -Werror=declaration-after-statement -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch i386 -arch x86_64 -g -I/Library/Frameworks/Python.framework/Versions/3.4/include/python3.4m -c ccy_classic_lstsqr.c -o build/temp.macosx-10.6-intel-3.4/ccy_classic_lstsqr.o
ccy_classic_lstsqr.c:2040:28: warning: unused function '__Pyx_PyObject_AsString'
[-Wunused-function]
static CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject* o) {
^
ccy_classic_lstsqr.c:2037:32: warning: unused function
'__Pyx_PyUnicode_FromString' [-Wunused-function]
static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(char* c_str) {
^
ccy_classic_lstsqr.c:2104:26: warning: unused function '__Pyx_PyObject_IsTrue'
[-Wunused-function]
static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {
^
ccy_classic_lstsqr.c:2159:33: warning: unused function '__Pyx_PyIndex_AsSsize_t'
[-Wunused-function]
static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {
^
ccy_classic_lstsqr.c:2188:33: warning: unused function '__Pyx_PyInt_FromSize_t'
[-Wunused-function]
static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {
^
ccy_classic_lstsqr.c:1584:32: warning: unused function '__Pyx_PyInt_From_long'
[-Wunused-function]
static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {
^
ccy_classic_lstsqr.c:1631:27: warning: function '__Pyx_PyInt_As_long' is not
needed and will not be emitted [-Wunneeded-internal-declaration]
static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) {
^
ccy_classic_lstsqr.c:1731:26: warning: function '__Pyx_PyInt_As_int' is not
needed and will not be emitted [-Wunneeded-internal-declaration]
static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) {
^
8 warnings generated.
/usr/bin/clang -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -g build/temp.macosx-10.6-intel-3.4/ccy_classic_lstsqr.o -o /Users/sebastian/Github/python_reference/benchmarks/ccy_classic_lstsqr.so


#### 4. Importing and running the code

import ccy_classic_lstsqr

%timeit py_classic_lstsqr(x, y)
%timeit cy_classic_lstsqr(x, y)
%timeit ccy_classic_lstsqr.ccy_classic_lstsqr(x, y)

100 loops, best of 3: 2.9 ms per loop
1000 loops, best of 3: 212 µs per loop
1000 loops, best of 3: 207 µs per loop