Kernel Methods

COMP4670/8600 - Introduction to Statistical Machine Learning - Tutorial 4

Discussion

Get into groups of two or three and take turns explaining the following (about 2 minutes each):

  • regression vs classification
  • Fisher's discriminant
  • generative vs discriminative probabilistic methods
  • logistic regression
  • support vector machines
  • basis functions vs kernels

$\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\dotprod}[2]{\langle #1, #2 \rangle}$

Setting up the environment

In [ ]:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

%matplotlib inline

The data set

This is the same dataset we used in Tutorial 2.

We will use an old dataset on the price of housing in Boston (see description). The aim is to predict the median value of the owner-occupied homes from various other factors. We will use a normalised version of this data, where each row is an example. The median value of homes is given in the first column (the label) and the value of each subsequent feature has been normalised to be in the range $[-1,1]$. Download this dataset from mldata.org.

Read in the data using pandas. Remove the column containing the binary variable 'CHAS' using drop, which should give you a DataFrame with 506 rows (examples) and 13 columns (1 label and 12 features).

In [ ]:
# Column names: the label ('medv') followed by the 13 features
names = ['medv', 'crim', 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis', 'rad', 'tax', 'ptratio', 'b', 'lstat']
data = pd.read_csv('housing_scale.csv', header=None, names=names)
data.head()
# Drop the binary 'chas' column, leaving 1 label and 12 features (506 rows x 13 columns)
data.drop('chas', axis=1, inplace=True)
data.shape

Constructing new kernels

In the lectures, we saw that certain operations on kernels preserve positive semidefiniteness. Recall that a symmetric matrix $K\in \RR^{n \times n}$ is positive semidefinite if for all vectors $a\in\RR^n$ we have the inequality $$ a^T K a \geqslant 0. $$

Prove the following relations:

  1. Given positive semidefinite matrices $K_1$, $K_2$, show that $K_1 + K_2$ is a valid kernel.
  2. Given a positive semidefinite matrix $K$, show that $K^2 = K \cdot K$ is a valid kernel, where the multiplication is pointwise (the Hadamard product), not matrix multiplication.

Solution description
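
These two facts can also be sanity-checked numerically; this is not a proof, but it is a cheap way to catch a mistake in the argument. The sketch below builds random positive semidefinite matrices and confirms that the smallest eigenvalue of their sum and of their elementwise product is non-negative up to round-off (it uses only the numpy import from the setup cell).

In [ ]:
# Numerical sanity check (not a proof) of the two closure properties.
rng = np.random.RandomState(0)
n = 6

def random_psd(n):
    """A random positive semidefinite matrix: B B^T is PSD for any B."""
    B = rng.randn(n, n)
    return B.dot(B.T)

K1, K2 = random_psd(n), random_psd(n)

# Smallest eigenvalues should be >= 0 (up to floating point error)
print(np.linalg.eigvalsh(K1 + K2).min())   # sum of PSD matrices
print(np.linalg.eigvalsh(K1 * K2).min())   # elementwise (Hadamard) product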

Polynomial kernel using closure

Using the properties proven above, show that the inhomogeneous polynomial kernel of degree 2, $$k(x,y) = (\dotprod{x}{y} + 1)^2,$$ is positive semidefinite.

Solution description
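
One possible route (a sketch only, relying on the two closure properties above together with the easy facts that a non-negative scaling of a positive semidefinite kernel and the constant kernel $k(x,y)=1$ are both positive semidefinite): expand $$ (\dotprod{x}{y} + 1)^2 = \dotprod{x}{y}^2 + 2\dotprod{x}{y} + 1, $$ where the first term is a pointwise product of the linear kernel with itself (property 2), the second is a scaled linear kernel, and the third is the constant kernel, so the sum is positive semidefinite by property 1.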

Empirical comparison

Recall from Tutorial 2 that we could explicitly construct the polynomial basis function. In fact, this demonstrates the relation $$ k(x,y) = (\dotprod{x}{y} + 1)^2 = \dotprod{\phi(x)}{\phi(y)}, $$ where $$ \phi(x) = (x_1^2, x_2^2, \ldots, x_n^2, \sqrt{2}x_1 x_2, \ldots, \sqrt{2}x_{n-1} x_n, \sqrt{2}x_1, \ldots, \sqrt{2}x_n, 1). $$ Computing the kernel via $\phi$ is sometimes referred to as an explicit feature map, or the primal version of a kernel method.

For the data above, construct two kernel matrices, one using the explicit feature map and the second using the equation for the polynomial kernel. Confirm that they are the same.

In [ ]:
# Solution goes here
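
One possible sketch (assuming the data cell above has been run, with the label 'medv' in the first column; the ordering of the coordinates of $\phi$ does not matter as long as the inner products agree):

In [ ]:
# Sketch: kernel matrix via the explicit feature map vs. the kernel formula.
X = data.iloc[:, 1:].values          # features only; column 0 is the label 'medv'

def phi(X):
    """Explicit degree-2 polynomial feature map, applied to each row of X."""
    n_features = X.shape[1]
    rows = []
    for x in X:
        cross = [np.sqrt(2) * x[i] * x[j]
                 for i in range(n_features) for j in range(i + 1, n_features)]
        rows.append(np.concatenate([x ** 2, cross, np.sqrt(2) * x, [1.0]]))
    return np.array(rows)

Phi = phi(X)
K_explicit = Phi.dot(Phi.T)          # <phi(x), phi(y)>
K_kernel = (X.dot(X.T) + 1) ** 2     # (<x, y> + 1)^2

print(np.allclose(K_explicit, K_kernel))   # should print True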

There are pros and cons for each method of computing the kernel matrix. Discuss.

Solution description

Regularized least squares with kernels

This section is analogous to the part in Tutorial 2 about regularized least squares.

State the cost function and the regulariser carefully, defining all symbols, and show that the regularized least squares solution can be expressed as in Lecture 5 and Lecture 9: $$ w = \left( \lambda \mathbf{I} + \Phi^T \Phi\right)^{-1} \Phi^T t. $$ Explain the reason for each step.

Solution description
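
For reference, one common way to set up the notation (a sketch, not the only valid choice): take the cost to be the sum-of-squares error with a quadratic regulariser, $$ E(w) = \frac{1}{2} \| \Phi w - t \|^2 + \frac{\lambda}{2} w^T w, $$ where $\Phi \in \RR^{N \times M}$ is the design matrix whose $n$-th row is $\phi(x_n)^T$, $t \in \RR^N$ is the vector of targets, and $\lambda \geqslant 0$ is the regularisation constant. Setting $\nabla_w E = 0$ then yields the closed form above.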

By substituting $w = \Phi^T a$, derive the regularized least squares method in terms of the kernel matrix $K = \Phi \Phi^T$.

Solution description
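
Once you have the dual form, a quick numerical check of the algebra can be run on random data. The sketch below assumes the standard results $w = \left(\lambda \mathbf{I} + \Phi^T \Phi\right)^{-1} \Phi^T t$ and $a = \left(K + \lambda \mathbf{I}\right)^{-1} t$ with $K = \Phi \Phi^T$; if those are right, the training-set predictions must agree, i.e. $\Phi w = K a$.

In [ ]:
# Sanity check: primal and dual closed forms give the same predictions.
rng = np.random.RandomState(1)
N, D = 20, 5
Phi = rng.randn(N, D)     # design matrix of basis-function values
t = rng.randn(N)          # targets
lam = 0.1

# Primal: w = (lambda I + Phi^T Phi)^{-1} Phi^T t
w = np.linalg.solve(lam * np.eye(D) + Phi.T.dot(Phi), Phi.T.dot(t))

# Dual: a = (K + lambda I)^{-1} t, with K = Phi Phi^T
K = Phi.dot(Phi.T)
a = np.linalg.solve(K + lam * np.eye(N), t)

print(np.allclose(Phi.dot(w), K.dot(a)))   # should print True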

Comparing solutions in $a$ and $w$

Implement kernelized regularized least squares as derived above. This is often referred to as the dual version of the kernel method.

Compare this with the solution from Tutorial 2. Implement two classes:

  • RLSPrimal
  • RLSDual

each of which contains a train and a predict function.

Think carefully about the interfaces to the training and test procedures for the two different versions of regularized least squares. Also think about the parameters that need to be stored in the class.

In [ ]:
# Solution goes here
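
One possible skeleton (a sketch, not the only reasonable design): the primal class stores the weight vector $w$, while the dual class must also store the training data in order to evaluate the kernel at prediction time. It assumes the `phi` feature-map function from the empirical comparison sketch above and a `train(X, t)` / `predict(X)` interface, both of which are choices rather than requirements.

In [ ]:
# Sketch of the two classes, using the degree-2 polynomial kernel.
class RLSPrimal(object):
    """Regularized least squares in the primal: learns and stores w."""
    def __init__(self, reg_param=1.0):
        self.reg_param = reg_param
        self.w = None

    def train(self, X, t):
        Phi = phi(X)                                   # explicit feature map
        D = Phi.shape[1]
        A = self.reg_param * np.eye(D) + Phi.T.dot(Phi)
        self.w = np.linalg.solve(A, Phi.T.dot(t))

    def predict(self, X):
        return phi(X).dot(self.w)


class RLSDual(object):
    """Regularized least squares in the dual: learns a, keeps the training data."""
    def __init__(self, reg_param=1.0):
        self.reg_param = reg_param
        self.a = None
        self.X_train = None

    def train(self, X, t):
        self.X_train = X                               # needed at prediction time
        K = (X.dot(X.T) + 1) ** 2                      # degree-2 polynomial kernel
        self.a = np.linalg.solve(K + self.reg_param * np.eye(K.shape[0]), t)

    def predict(self, X):
        K_test = (X.dot(self.X_train.T) + 1) ** 2      # k(x_test, x_train)
        return K_test.dot(self.a)

Both classes can then be trained on the features and labels extracted from `data` above, and their predictions compared with `np.allclose`.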

(optional) General kernel

Consider how you would generalise the two classes above if you wanted to use a polynomial kernel of degree 3. For the primal version, assume you have a function feature_map(X) that returns the explicit feature map for the kernel, and for the dual version assume you have a function kernel_matrix(X) that returns the kernel matrix.
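
A sketch of one possible generalisation for the dual class (the primal case is analogous, with feature_map(X) in place of the kernel): pass the kernel function into the constructor so the solver code itself never changes. Note that kernel_matrix is assumed here to take two arguments so the class can compute test-versus-training kernel values, which goes slightly beyond the single-argument signature in the question; the class name KernelRLSDual is purely illustrative.

In [ ]:
# Sketch: a dual solver parameterised by the kernel function itself.
class KernelRLSDual(object):
    def __init__(self, kernel_matrix, reg_param=1.0):
        self.kernel_matrix = kernel_matrix
        self.reg_param = reg_param

    def train(self, X, t):
        self.X_train = X
        K = self.kernel_matrix(X, X)
        self.a = np.linalg.solve(K + self.reg_param * np.eye(K.shape[0]), t)

    def predict(self, X):
        return self.kernel_matrix(X, self.X_train).dot(self.a)

# For a polynomial kernel of degree 3:
# model = KernelRLSDual(lambda X, Y: (X.dot(Y.T) + 1) ** 3)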