Previously, we looked at a simplistic example of how to test the performance of a classifier. Using the iris data set, it looked something like this:
# Get the data
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
# Instantiate and train the classifier
from sklearn.svm import LinearSVC
clf = LinearSVC(loss='l2')
clf.fit(X, y)
# Check input vs. output labels
y_pred = clf.predict(X)
print(y_pred == y)
Question: what might be the problem with this approach?
Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data.
To avoid over-fitting, we have to define two different sets:
- a training set, used to learn the parameters of the model
- a test set, used to evaluate the trained model on data it has not seen
In scikit-learn such a random split can be quickly computed with the
train_test_split
helper function. It can be used this way:
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.25, random_state=0)
print(X.shape, X_train.shape, X_test.shape)
Now we train on the training data, and test on the testing data:
clf = LinearSVC(loss='l2').fit(X_train, y_train)
y_pred = clf.predict(X_test)
print (y_pred == y_test)
There is an issue here, however: by defining these two sets, we drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, test) sets.
A solution is to split the whole dataset repeatedly into different train and test sets, and to average the prediction scores obtained over the different splits. Such a procedure is called cross-validation. This approach can be computationally expensive, but it does not waste as much data as fixing a single arbitrary test set, which is a major advantage in problems such as inverse inference where the number of samples is very small.
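The idea can be sketched with plain numpy, without scikit-learn's helpers: split the data into k folds, fit on all folds but one, score on the held-out fold, and average. The dataset below (a noisy quadratic, similar to one used later in this section), the fold count, and the degree are illustrative assumptions:

```python
import numpy as np

def kfold_cv_error(x, y, k=5, deg=2):
    """Average RMS validation error over k train/test splits of the data."""
    rng = np.random.RandomState(0)
    folds = np.array_split(rng.permutation(len(x)), k)
    errors = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # fit on the training folds, score on the held-out fold
        p = np.polyfit(x[train_idx], y[train_idx], deg)
        resid = y[test_idx] - np.polyval(p, x[test_idx])
        errors.append(np.sqrt(np.mean(resid ** 2)))
    return np.mean(errors)

# synthetic noisy quadratic for illustration
rng = np.random.RandomState(42)
x = 10 * rng.random_sample(50)
y = 0.5 * x ** 2 - x + 1 + rng.normal(0, 2, x.shape)
print("5-fold CV error: %.3f" % kfold_cv_error(x, y))
```

With noise of standard deviation 2 and the correct model class, the averaged error should land near the intrinsic scatter of the data.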
We'll explore cross-validation a bit later, but you can find much more information on cross-validation in scikit-learn here: http://scikit-learn.org/dev/modules/cross_validation.html
The content in this section is adapted from Andrew Ng's excellent Coursera course, available here: https://www.coursera.org/course/ml
The issues associated with validation and cross-validation are some of the most important aspects of the practice of machine learning. Selecting the optimal model for your data is vital, and is a piece of the problem that is not often appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
The answer is often counter-intuitive. In particular, sometimes using a more complicated model will give worse results, and sometimes adding training data will not improve your results. The ability to determine which steps will improve your model is what separates successful machine learning practitioners from unsuccessful ones.
For this section, we'll work with a simple 1D regression problem. This will help us easily visualize the data and the model, and the results generalize easily to higher-dimensional datasets. We'll explore polynomial regression: the fitting of a polynomial to points. Though this can be accomplished within scikit-learn (the machinery is in sklearn.linear_model), for simplicity we'll use numpy.polyfit and numpy.polyval:
%pylab inline
import numpy as np
x = 10 * np.random.random(20)
y = 0.5 * x ** 2 - x + 1
p = np.polyfit(x, y, deg=2)
print(p)
As we can see, polyfit fits a polynomial to one-dimensional data. We can visualize this to see the result:
x_new = np.linspace(-1, 12, 1000)
y_new = np.polyval(p, x_new)
plt.scatter(x, y)
plt.plot(x_new, y_new)
We've chosen the model to use through the hyperparameter deg. A hyperparameter is a parameter that determines the type of model we use: for example, deg=1 gives a linear model, deg=2 gives a 2nd-order polynomial, etc.
Now, what if the data is not a perfect polynomial? Below, we'll take the above problem and add a small amount of Gaussian scatter in y. Here we'll take the additional step of computing the RMS error of the resulting model on the input data.
np.random.seed(42)
x = 10 * np.random.random(20)
y = 0.5 * x ** 2 - x + 1 + np.random.normal(0, 2, x.shape)
# ---> Change the degree here
p = np.polyfit(x, y, deg=2)
x_new = np.linspace(0, 10, 100)
y_new = np.polyval(p, x_new)
plt.scatter(x, y)
plt.plot(x_new, y_new)
plt.ylim(-10, 50)
print("RMS error = %.4g" % np.sqrt(np.mean((y - np.polyval(p, x)) ** 2)))
What happens to the fit and the RMS error as the degree is increased?
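One quick way to see the answer is to loop over a few degrees and print the training RMS error for each (a sketch; the seed and the particular degrees are arbitrary choices). Because higher-degree polynomials contain lower-degree ones as special cases, the training error can only decrease as the degree grows, even though high-degree fits generalize poorly:

```python
import warnings
import numpy as np

warnings.simplefilter('ignore')  # silence polyfit's RankWarning at high degree

np.random.seed(42)
x = 10 * np.random.random(20)
y = 0.5 * x ** 2 - x + 1 + np.random.normal(0, 2, x.shape)

errs = {}
for deg in (1, 2, 5, 10):
    # fit a degree-`deg` polynomial and measure its error on the *training* data
    p = np.polyfit(x, y, deg)
    errs[deg] = np.sqrt(np.mean((y - np.polyval(p, x)) ** 2))
    print("deg=%2d  training RMS=%.3f" % (deg, errs[deg]))
```

The big drop happens from deg=1 to deg=2 (the true model is quadratic); beyond that, the extra flexibility mostly fits the noise.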
One way to address this issue is to use what are often called Learning Curves.
Given a particular dataset and a model we'd like to fit (e.g. a polynomial), we'd like to tune our value of the hyperparameter d to give us the best fit.
We'll imagine we have a simple regression problem: given the size of a house, we'd like to predict how much it's worth. We'll fit it with our polynomial regression model.
Run the following code to see an example plot:
from figures import plot_bias_variance
plot_bias_variance(8)
In the above figure, we see fits for three different values of d. For d = 1, the data is under-fit. This means that the model is too simplistic: no straight line will ever be a good fit to this data. In this case, we say that the model suffers from high bias. The model itself is biased, and this will be reflected in the fact that the data is poorly fit. At the other extreme, for d = 6 the data is over-fit. This means that the model has too many free parameters (7 coefficients in this case) which can be adjusted to perfectly fit the training data. If we add a new point to this plot, though, chances are it will be very far from the curve representing the degree-6 fit. In this case, we say that the model suffers from high variance. The reason for this label is that if any of the input points are varied slightly, it could result in an extremely different model.
In the middle, for d = 2, we have found a good mid-point. It fits the data fairly well, and does not suffer from the bias and variance problems seen in the figures on either side. What we would like is a way to quantitatively identify bias and variance, and optimize the hyperparameters (in this case, the polynomial degree d) in order to determine the best algorithm. This can be done through a process called cross-validation.
We'll create a dataset like in the example above, and use this to test our validation scheme. First we'll define some utility routines:
def test_func(x, err=0.5):
    return np.random.normal(10 - 1. / (x + 0.1), err)

def compute_error(x, y, p):
    yfit = np.polyval(p, x)
    return np.sqrt(np.mean((y - yfit) ** 2))
from sklearn.cross_validation import train_test_split
N = 200
f_crossval = 0.5
error = 1.0
# randomly sample the data
np.random.seed(1)
x = np.random.random(N)
y = test_func(x, error)
# split into training, validation, and testing sets.
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=f_crossval)
# show the training and cross-validation sets
plt.scatter(xtrain, ytrain, color='red')
plt.scatter(xtest, ytest, color='blue')
In order to quantify the effects of bias and variance and construct the best possible estimator, we will split our training data into a training set and a validation set. As a general rule, the training set should be about 60% of the samples.
The general idea is as follows. The model parameters (in our case, the coefficients of the polynomials) are learned using the training set as above. The error is evaluated on the cross-validation set, and the meta-parameters (in our case, the degree of the polynomial) are adjusted so that this cross-validation error is minimized. Finally, the labels are predicted for the test set. These labels are used to evaluate how well the algorithm can be expected to perform on unlabeled data.
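The split itself can be sketched with plain index shuffling (the 60/20/20 proportions follow the rule of thumb above; the exact fractions are a judgment call):

```python
import numpy as np

rng = np.random.RandomState(0)
N = 200
idx = rng.permutation(N)  # shuffle so the split is random

n_train = int(0.6 * N)    # learn model parameters here
n_val = int(0.2 * N)      # tune hyperparameters here

train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]  # touched only once, at the very end
print(len(train_idx), len(val_idx), len(test_idx))  # 120 40 40
```

Because the three index sets are disjoint slices of one permutation, no sample appears in more than one set.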
The cross-validation error of our polynomial regression model can be visualized by plotting the error as a function of the polynomial degree d. We can do this as follows:
# suppress warnings from Polyfit
import warnings
warnings.filterwarnings('ignore', message='Polyfit*')
degrees = np.arange(21)
train_err = np.zeros(len(degrees))
validation_err = np.zeros(len(degrees))
for i, d in enumerate(degrees):
    p = np.polyfit(xtrain, ytrain, d)
    train_err[i] = compute_error(xtrain, ytrain, p)
    validation_err[i] = compute_error(xtest, ytest, p)

fig, ax = plt.subplots()
ax.plot(degrees, validation_err, lw=2, label='cross-validation error')
ax.plot(degrees, train_err, lw=2, label='training error')
ax.plot([0, 20], [error, error], '--k', label='intrinsic error')
ax.legend(loc=0)
ax.set_xlabel('degree of fit')
ax.set_ylabel('rms error')
This figure compactly shows the reason that cross-validation is important. On the left side of the plot, we have a very low-degree polynomial, which under-fits the data. This leads to a very high error for both the training set and the cross-validation set. On the far right side of the plot, we have a very high-degree polynomial, which over-fits the data. This can be seen in the fact that the training error is very low, while the cross-validation error is very high. Plotted for comparison is the intrinsic error (this is the scatter artificially added to the data). For this toy dataset, error = 1.0 is the best we can hope to attain. Choosing d=6 in this case gets us very close to the optimal error.
The astute reader will realize that something is amiss here: in the above plot, d = 6 gives the best results. But in the previous plot, we found that d = 6 vastly over-fits the data. What's going on here? The difference is the number of training points used. In the previous example, there were only eight training points. In this example, we have 100. As a general rule of thumb, the more training points used, the more complicated the model can be.
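As a rough illustration of this rule of thumb (using an assumed noisy quadratic rather than the data above), compare the validation error of a degree-6 fit trained on 8 points versus 100 points drawn from the same distribution:

```python
import warnings
import numpy as np

warnings.simplefilter('ignore')  # polyfit may warn for small sample sizes
rng = np.random.RandomState(0)

def degree6_validation_error(n_train):
    # train a degree-6 polynomial on n_train noisy samples of a quadratic
    x = 10 * rng.random_sample(n_train)
    y = 0.5 * x ** 2 - x + 1 + rng.normal(0, 2, x.shape)
    p = np.polyfit(x, y, 6)
    # evaluate on a fresh validation sample from the same distribution
    xv = 10 * rng.random_sample(500)
    yv = 0.5 * xv ** 2 - xv + 1 + rng.normal(0, 2, xv.shape)
    return np.sqrt(np.mean((yv - np.polyval(p, xv)) ** 2))

print("trained on 8 points:   RMS = %.2f" % degree6_validation_error(8))
print("trained on 100 points: RMS = %.2f" % degree6_validation_error(100))
```

With only 8 points, the 7-parameter model nearly interpolates the noise and its validation error blows up; with 100 points, the same model is well constrained and the validation error settles near the intrinsic scatter of 2.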
But how can you determine for a given model whether more training points will be helpful? A useful diagnostic for this is the learning curve.
A learning curve is a plot of the training and cross-validation error as a function of the number of training points. Note that when we train on a small subset of the training data, the training error is computed using this subset, not the full training set. These plots can give a quantitative view into how beneficial it will be to add training samples.
# suppress warnings from Polyfit
import warnings
warnings.filterwarnings('ignore', message='Polyfit*')
def plot_learning_curve(d):
    sizes = np.linspace(2, len(xtrain), 50).astype(int)
    train_err = np.zeros(sizes.shape)
    crossval_err = np.zeros(sizes.shape)

    for i, size in enumerate(sizes):
        # Train on only the first `size` points
        p = np.polyfit(xtrain[:size], ytrain[:size], d)
        # Validation error is on the *entire* validation set
        crossval_err[i] = compute_error(xtest, ytest, p)
        # Training error is on only the points used for training
        train_err[i] = compute_error(xtrain[:size], ytrain[:size], p)

    fig, ax = plt.subplots()
    ax.plot(sizes, crossval_err, lw=2, label='validation error')
    ax.plot(sizes, train_err, lw=2, label='training error')
    ax.plot([0, len(xtrain)], [error, error], '--k', label='intrinsic error')

    ax.set_xlabel('training set size')
    ax.set_ylabel('rms error')
    ax.legend(loc=0)
    ax.set_xlim(0, len(xtrain))
    ax.set_title('d = %i' % d)
Now that we've defined this function, let's plot an example learning curve:
plot_learning_curve(d=1)
Here we show the learning curve for d = 1. From the above discussion, we know that d = 1 is a high-bias estimator which under-fits the data. This is indicated by the fact that both the training and validation errors are very high. If this is the case, adding more training data will not help matters: both lines have converged to a relatively high error.
When the learning curves have converged, we need a more sophisticated model or more features to improve the error.
(equivalently we can decrease regularization, which we won't discuss in this tutorial)
plot_learning_curve(d=20)
plt.ylim(0, 15)
Here we show the learning curve for d = 20. From the above discussion, we know that d = 20 is a high-variance estimator which over-fits the data. This is indicated by the fact that the training error is much less than the validation error. As we add more samples to this training set, the training error will continue to climb, while the cross-validation error will continue to decrease, until they meet in the middle. In this case, our intrinsic error was set to 1.0, and we can infer that adding more data will allow the estimator to very closely match the best possible cross-validation error.
When the learning curves have not converged, it indicates that the model is too complicated for the amount of data we have. We should either find more training data, or use a simpler model.
(equivalently we can increase regularization, which we won't discuss in this tutorial)
We’ve seen above that an under-performing algorithm can be due to two possible situations: high bias (under-fitting) and high variance (over-fitting). In order to evaluate our algorithm, we set aside a portion of our training data for cross-validation. Using the technique of learning curves, we can train on progressively larger subsets of the data, evaluating the training error and cross-validation error to determine whether our algorithm has high variance or high bias. But what do we do with this information?
If our algorithm shows high bias, the following actions might help:
- Add more features: additional information about each sample gives the model more to work with.
- Use a more sophisticated model: added complexity can better capture the structure of the data.
- Decrease regularization: a less-constrained model has more freedom to fit the data.
If our algorithm shows high variance, the following actions might help:
- Use more training samples: as the learning curves above show, more data helps an over-fit model generalize.
- Use fewer features or a simpler model: reduced complexity constrains the fit.
- Increase regularization: penalizing model complexity reduces over-fitting.
These choices become very important in real-world situations. For example, due to limited telescope time, astronomers must seek a balance between observing a large number of objects, and observing a large number of features for each object. Determining which is more important for a particular learning task can inform the observing strategy that the astronomer employs. In a later exercise, we will explore the use of learning curves for the photometric redshift problem.
There are a lot more options for performing validation and model testing. In particular, there are several schemes for cross-validation, in which the model is fit multiple times with different training and test sets. The details are different, but the principles are the same as what we've seen here.
For more information see the sklearn.cross_validation module documentation, and the information on the scikit-learn website.
Using validation schemes to determine hyper-parameters means that we are fitting the hyper-parameters to the particular validation set. In the same way that parameters can be over-fit to the training set, hyperparameters can be over-fit to the validation set. Because of this, the validation error tends to under-predict the classification error of new data.
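This workflow can be sketched end-to-end: pick the degree on a validation set, then report the error on a held-out test set. The data below is generated from the same toy function used earlier in this section; the split sizes and degree range are illustrative choices:

```python
import warnings
import numpy as np

warnings.simplefilter('ignore')  # silence RankWarning at high degrees

def rms(x, y, p):
    return np.sqrt(np.mean((y - np.polyval(p, x)) ** 2))

rng = np.random.RandomState(1)
x = rng.random_sample(300)
y = rng.normal(10 - 1. / (x + 0.1), 1.0)  # same toy function as above

# 60/20/20 split into train / validation / test
xtrain, ytrain = x[:180], y[:180]
xval, yval = x[180:240], y[180:240]
xtest, ytest = x[240:], y[240:]

# choose the hyperparameter (degree) that minimizes the *validation* error
degrees = list(range(1, 15))
val_err = [rms(xval, yval, np.polyfit(xtrain, ytrain, d)) for d in degrees]
best_d = degrees[int(np.argmin(val_err))]

# the final, honest estimate of generalization error comes from the test set
p = np.polyfit(xtrain, ytrain, best_d)
print("chosen degree:", best_d)
print("validation error: %.3f" % min(val_err))
print("test error:       %.3f" % rms(xtest, ytest, p))
```

Note that the validation error is minimized by construction, so it is an optimistic estimate; the test error, computed only once at the end, is the number to report.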
For this reason, it is recommended to split the data into three sets:
- a training set, used to learn the model parameters
- a validation set, used to tune the hyperparameters
- a test set, used only to estimate the error on new, unseen data
This may seem excessive, and many machine learning practitioners ignore the need for a test set. But if your goal is to predict the error of a model on unknown data, using a test set is vital.