By the end of this section you will know how to instantiate a scikit-learn classifier, train it with the fit(...) method, and make predictions with the predict(...) method. In this example we will perform classification of the iris data with several different classifiers.
First we'll load the iris data as we did before:
from sklearn.datasets import load_iris
iris = load_iris()
In the iris dataset example, suppose we are assigned the task to guess the class of an individual flower given the measurements of petals and sepals. This is a classification task, hence we have:
X = iris.data
y = iris.target
print(X.shape)
print(y.shape)
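As a quick sanity check (a small sketch, not part of the original walkthrough), the shapes confirm that the iris data contains 150 samples with 4 features each:

```python
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data    # feature matrix: one row per flower, one column per measurement
y = iris.target  # class labels: 0, 1, or 2

# 150 flowers, 4 measurements (sepal/petal length and width) each
assert X.shape == (150, 4)
assert y.shape == (150,)
```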
Once the data has this format it is trivial to train a classifier, for instance a support vector machine with a linear kernel:
from sklearn.svm import LinearSVC
LinearSVC is an example of a scikit-learn classifier. If you're curious about how it is used, you can use IPython's "?" syntax to see the documentation:
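In IPython you would type LinearSVC? to pop up the docstring; outside IPython, the built-in help() prints the same information. A minimal sketch:

```python
from sklearn.svm import LinearSVC

# In IPython/Jupyter you would type:  LinearSVC?
# In plain Python, help() prints the same docstring:
help(LinearSVC)
```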
The first thing to do is to create an instance of the classifier. This can be done simply by calling the class name, with any arguments that the object accepts:
clf = LinearSVC(loss='squared_hinge')  # this loss was called 'l2' in older scikit-learn versions
clf is a statistical model that has parameters controlling the learning algorithm (those parameters are sometimes called the hyperparameters). The hyperparameters can be supplied by the user in the constructor of the model. We will explain later how to choose a good combination using either simple empirical rules or data-driven selection:
print(clf)
By default the model parameters are not initialized. They will be tuned automatically from the data by calling the fit method with the data X and labels y:
clf = clf.fit(X, y)
We can now see some of the fitted parameters within the classifier object. In scikit-learn, parameters learned from the training data have a trailing underscore:
clf.coef_
clf.intercept_
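For the iris problem, with three classes and four features, the fitted parameters have one entry per class (a sketch; the exact coefficient values depend on the solver and scikit-learn version, but the shapes do not):

```python
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

iris = load_iris()
clf = LinearSVC().fit(iris.data, iris.target)

# One weight vector per class: 3 classes x 4 features ...
print(clf.coef_.shape)       # (3, 4)
# ... and one bias term per class
print(clf.intercept_.shape)  # (3,)
```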
Once the model is trained, it can be used to predict the most likely outcome for unseen data. For instance, let us define a simple sample that looks like the first sample of the iris dataset:
X_new = [[ 5.0, 3.6, 1.3, 0.25]]
clf.predict(X_new)
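The prediction is a class id; iris.target_names maps it back to a species name. A small sketch:

```python
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

iris = load_iris()
clf = LinearSVC().fit(iris.data, iris.target)

X_new = [[5.0, 3.6, 1.3, 0.25]]
pred = clf.predict(X_new)          # an array containing one class id, e.g. array([0])
print(iris.target_names[pred[0]])  # the corresponding species name
```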
All classification tasks involve predicting an unknown category based on observed features. Some examples of interesting classification tasks: classifying e-mail as spam or not spam, recognizing handwritten digits, and identifying the language of a piece of text.
Now we'll take a few minutes and try out another learning model. Because of scikit-learn's uniform interface, the syntax is identical to that of LinearSVC above.
There are many possible classifiers; you could try any of the methods discussed at http://scikit-learn.org/stable/supervised_learning.html. Alternatively, you can explore what's available in scikit-learn using just the tab-completion feature. For example, import the linear_model submodule:
from sklearn import linear_model
Then use tab completion to find what's available: type linear_model. followed by the tab key to see an interactive list of the names within this submodule. The ones that begin with capital letters are the available models.
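If you are not in an interactive shell, the built-in dir() gives the same list; filtering for capitalized names isolates the model classes (a sketch):

```python
from sklearn import linear_model

# Mimic tab completion: list public names, keeping only the capitalized
# ones, which by scikit-learn convention are the model classes
models = [name for name in dir(linear_model) if name[0].isupper()]
print(models)  # includes e.g. 'LinearRegression', 'LogisticRegression'
```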
Now select a new classifier and try out a classification of the iris data.
Some good choices are:

sklearn.naive_bayes.GaussianNB: Gaussian Naive Bayes model. This is an unsophisticated model which can be trained very quickly. It is often used to obtain baseline results before moving to a more sophisticated classifier.

sklearn.svm.LinearSVC: Support Vector Machines without kernels, based on liblinear.

sklearn.svm.SVC: Support Vector Machines with kernels, based on libsvm.

sklearn.linear_model.LogisticRegression: Regularized logistic regression, based on liblinear.

sklearn.linear_model.SGDClassifier: Regularized linear models (SVM or logistic regression) trained using a Stochastic Gradient Descent algorithm written in Cython.

sklearn.neighbors.KNeighborsClassifier: k-Nearest Neighbors classifier, based on the ball tree data structure for low-dimensional data and brute-force search for high-dimensional data.

sklearn.tree.DecisionTreeClassifier: A classifier based on a series of binary decisions. This is another very fast classifier, which can be very powerful.
Choose one of the above, import it, and use the ? feature to learn about it. Then instantiate the model as we did with LinearSVC above, use our data X and y to train it with the fit(...) method, and finally call the predict method to find the classification of X_new.
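As one possible walk-through of this exercise, here is the same three-step pattern with GaussianNB (a sketch; any classifier from the list above follows the same interface):

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

iris = load_iris()
X, y = iris.data, iris.target
X_new = [[5.0, 3.6, 1.3, 0.25]]

# The same three steps as with LinearSVC: instantiate, fit, predict
clf = GaussianNB()
clf.fit(X, y)
print(clf.predict(X_new))
```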
Some models have additional prediction modes. For example, if clf is a LogisticRegression classifier, then it is possible to do a probabilistic prediction for any point. This can be done through the predict_proba method:
from sklearn.linear_model import LogisticRegression
clf2 = LogisticRegression()
clf2.fit(X, y)
print(clf2.predict_proba(X_new))
The result gives the probability (between zero and one) that the test point comes from any of the three classes.
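Each row of the predict_proba output sums to one, and the column with the largest probability agrees with the class id returned by predict (a quick sketch):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
clf2 = LogisticRegression().fit(iris.data, iris.target)

X_new = [[5.0, 3.6, 1.3, 0.25]]
proba = clf2.predict_proba(X_new)

print(proba.sum(axis=1))     # each row sums to 1.0
print(proba.argmax(axis=1))  # index of the most likely class
print(clf2.predict(X_new))   # the same class id as the argmax
```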
This means that the model estimates, for the sample in X_new, a probability of belonging to each class: target = 0 (setosa), target = 1 (versicolor), and target = 2 (virginica). Of course, the predict method that outputs the label id of the most likely outcome is also available:
clf2.predict(X_new)
Predicting a new value is nice, but how do we gauge how well we've done? We'll explore this in more depth later, but here's a quick taste now.
Let's get a rough evaluation of our model by using it to predict the values of the training data:
y_model = clf2.predict(X)
print(y_model == y)
We see that most of the predictions are correct!
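A single number summarizing this boolean array is the training accuracy, i.e. the fraction of correct predictions (a sketch; the exact value depends on the model and scikit-learn version):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
X, y = iris.data, iris.target
clf2 = LogisticRegression().fit(X, y)

y_model = clf2.predict(X)
# Fraction of training samples predicted correctly
print(np.mean(y_model == y))
```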
Be careful, though: what we've done here is not a very good model evaluation scheme, since scoring a model on the data it was trained on is overly optimistic. In a later section we'll introduce a set of techniques called cross-validation, which treats model evaluation a little more carefully.
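As a preview (a hedged sketch, not this section's own code), scikit-learn's cross_val_score automates the split-train-evaluate loop by scoring each model on data it was not trained on:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

iris = load_iris()
clf = LogisticRegression()

# 5-fold cross-validation: fit on 4/5 of the data, score on the held-out 1/5,
# repeated so every fold is held out once
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
print(scores)  # one accuracy per fold
print(scores.mean())
```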