Classification

COMP4670/8600 - Introduction to Statistical Machine Learning - Tutorial 3

$\newcommand{\trace}[1]{\operatorname{tr}\left\{#1\right\}}$ $\newcommand{\Norm}[1]{\lVert#1\rVert}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\inner}[2]{\langle #1, #2 \rangle}$ $\newcommand{\DD}{\mathscr{D}}$ $\newcommand{\grad}[1]{\operatorname{grad}#1}$ $\DeclareMathOperator*{\argmin}{arg\,min}$

Setting up the environment

In [ ]:
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt

%matplotlib inline

The data set

We will predict the incidence of diabetes based on various measurements (see description). Instead of directly using the raw data, we use a normalised version, where the label to be predicted (the incidence of diabetes) is in the first column. Download the data from the course website.

Read in the data using np.loadtxt.

In [ ]:
# Solution goes here
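
A minimal sketch of loading the file, assuming a comma-separated layout; the filename diabetes_scaled.csv is a placeholder for whatever the downloaded file is called on your machine (adjust the delimiter to match the actual format).

In [ ]:
data = np.loadtxt('diabetes_scaled.csv', delimiter=',')  # hypothetical filename
data.shape  # first column is the label, the remaining columns are the measurements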

Classification via Logistic Regression

Implement binary classification using logistic regression for a data set with two classes. Make sure you use appropriate Python style and docstrings.

Use scipy.optimize.fmin_bfgs to optimise your cost function. fmin_bfgs requires two things: the cost function to be minimised and the gradient of that cost function. Implement these as cost and grad, following the equations in the lectures.

Implement the function train that takes a matrix of examples and a vector of labels, and returns the maximum likelihood weight vector for logistic regression. Also implement a function predict that takes this maximum likelihood weight vector and a matrix of examples, and returns the predictions. See the section Putting everything together below for expected usage.

We add an extra column of ones to represent the constant basis.

In [ ]:
data = np.hstack([data, np.ones((data.shape[0], 1))]) # add a column of ones
data[:5,:]
In [ ]:
# Solution goes here
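
One possible sketch, assuming the negative log-likelihood from the lectures and labels encoded as 0/1 (if your labels are -1/+1, relabel them first); your own implementation may be organised differently.

In [ ]:
def sigmoid(z):
    """Logistic sigmoid function."""
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """Negative log-likelihood of logistic regression (labels assumed to be 0/1)."""
    p = sigmoid(X @ theta)
    eps = 1e-12  # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def grad(theta, X, y):
    """Gradient of the negative log-likelihood with respect to theta."""
    return X.T @ (sigmoid(X @ theta) - y)

def train(X, y):
    """Return the maximum likelihood weight vector found by BFGS."""
    theta0 = np.zeros(X.shape[1])
    return opt.fmin_bfgs(cost, theta0, fprime=grad, args=(X, y), disp=False)

def predict(theta, X):
    """Return hard 0/1 predictions by thresholding the predicted probability at 0.5."""
    return (sigmoid(X @ theta) >= 0.5).astype(float)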

Performance measure

There are many ways to compute the performance of a binary classifier. The key concept is the idea of a confusion matrix or contingency table:

                      Label
                   +1       -1
Prediction   +1    TP       FP
             -1    FN       TN

where

  • TP - true positive
  • FP - false positive
  • FN - false negative
  • TN - true negative

Implement three functions: one that returns the confusion matrix when comparing two lists (one set of predictions and one set of labels), and two that take the confusion matrix as input and return the accuracy and the balanced accuracy respectively. Accuracy is defined as the number of correct classifications divided by the total number of examples. Balanced accuracy is the average of the per-class accuracies, that is, the average of the accuracy when the true class is positive and the accuracy when the true class is negative.

In [ ]:
# Solution goes here
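
A possible sketch, following the layout of the table above (rows are predictions, columns are true labels); the positive argument is an assumption so that the same code works whether the positive class is encoded as 1 or +1 in your data.

In [ ]:
def confusion_matrix(pred, labels, positive=1):
    """Return np.array([[TP, FP], [FN, TN]]) comparing predictions against true labels."""
    pred, labels = np.asarray(pred), np.asarray(labels)
    tp = np.sum((pred == positive) & (labels == positive))
    fp = np.sum((pred == positive) & (labels != positive))
    fn = np.sum((pred != positive) & (labels == positive))
    tn = np.sum((pred != positive) & (labels != positive))
    return np.array([[tp, fp], [fn, tn]])

def accuracy(cmatrix):
    """Fraction of all examples that are classified correctly."""
    (tp, fp), (fn, tn) = cmatrix
    return (tp + tn) / (tp + fp + fn + tn)

def balanced_accuracy(cmatrix):
    """Average of the accuracy on the positive class and the accuracy on the negative class."""
    (tp, fp), (fn, tn) = cmatrix
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))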

Putting everything together

Consider the following code, which trains on all the examples and then predicts on the same training set. Discuss the results.

In [ ]:
y = data[:,0]
X = data[:,1:]
theta_best = train(X, y)
pred = predict(theta_best, X)
cmatrix = confusion_matrix(pred, y)
[accuracy(cmatrix), balanced_accuracy(cmatrix)]

Solution description

Fisher's discriminant

In the lectures, you saw that the Fisher criterion $$ J(w) = \frac{w^T S_B w}{w^T S_W w} $$ is maximised by Fisher's linear discriminant.

Define $S_B$ and $S_W$ as in the lectures and prove this result.
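
One way to see this (a sketch of the stationarity argument, using the usual two-class definitions $S_B = (m_2 - m_1)(m_2 - m_1)^T$ and $S_W = \sum_{n \in \mathcal{C}_1} (x_n - m_1)(x_n - m_1)^T + \sum_{n \in \mathcal{C}_2} (x_n - m_2)(x_n - m_2)^T$): setting the gradient of $J$ to zero gives
$$ \nabla J(w) = \frac{2\, S_B w \,(w^T S_W w) - 2\, S_W w\, (w^T S_B w)}{(w^T S_W w)^2} = 0
\quad\Longleftrightarrow\quad (w^T S_W w)\, S_B w = (w^T S_B w)\, S_W w, $$
so any stationary point satisfies the generalised eigenvalue problem $S_B w = J(w)\, S_W w$, and the maximum of $J$ is the largest eigenvalue. Since $S_B w = (m_2 - m_1)\big((m_2 - m_1)^T w\big)$ always points along $m_2 - m_1$, the maximising direction is $w \propto S_W^{-1}(m_2 - m_1)$, which is Fisher's linear discriminant.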

Solution description

Solution description

(optional) Effect of the regularisation parameter

Split the data into two halves, train on one half, and report performance on the other half. By repeating this experiment for different values of the regularisation parameter $\lambda$ we can get a feel for how the classifier's performance varies with the amount of regularisation. Plot the accuracy and balanced accuracy for at least 3 different choices of $\lambda$. Note that you may have to update your implementation of logistic regression to include the regularisation parameter.

In [ ]:
### Solution

def split_data(data):
    """Randomly split data into two equal groups."""
    np.random.seed(1)  # fixed seed so the split is reproducible
    N = len(data)
    idx = np.arange(N)
    np.random.shuffle(idx)
    train_idx = idx[:N // 2]
    test_idx = idx[N // 2:]

    X_train = data[train_idx, 1:]
    t_train = data[train_idx, 0]
    X_test = data[test_idx, 1:]
    t_test = data[test_idx, 0]

    return X_train, t_train, X_test, t_test

X_train, t_train, X_test, t_test = split_data(data)
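
A possible sketch of the experiment, assuming a regularised variant train_reg(X, y, lam) of your training function (a hypothetical name: it would add $\frac{\lambda}{2}\Norm{\theta}^2$ to the cost and $\lambda \theta$ to the gradient) together with the predict, confusion_matrix, accuracy and balanced_accuracy functions from above.

In [ ]:
lambdas = [0.01, 1.0, 100.0]  # at least three choices of the regularisation parameter

accs, baccs = [], []
for lam in lambdas:
    theta = train_reg(X_train, t_train, lam)  # hypothetical regularised training routine
    pred = predict(theta, X_test)
    cmatrix = confusion_matrix(pred, t_test)
    accs.append(accuracy(cmatrix))
    baccs.append(balanced_accuracy(cmatrix))

plt.semilogx(lambdas, accs, 'o-', label='accuracy')
plt.semilogx(lambdas, baccs, 's-', label='balanced accuracy')
plt.xlabel(r'$\lambda$')
plt.ylabel('performance on held-out half')
plt.legend()
plt.show()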

Solution description