By the end of this section you will be able to use an estimator's fit(...) method and predict(...) method.

Here we'll work through a short example of a regression problem: learning a continuous value from a set of features.
We'll use the simple Boston house prices dataset, available in scikit-learn. This records measurements of 13 attributes of housing markets around Boston, as well as the median price in each. The question is: can you predict the price of a new market given its attributes?
First we'll load the dataset:
from sklearn.datasets import load_boston
data = load_boston()
print(data.keys())
We can see that there are just over 500 data points:
print(data.data.shape)
print(data.target.shape)
The DESCR attribute contains a long description of the dataset:
print(data.DESCR)
It often helps to quickly visualize pieces of the data using histograms, scatter plots, or other plot types. Here we'll load pylab and show a histogram of the target values: the median price in each neighborhood.
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(data.target)
plt.xlabel('price ($1000s)')
plt.ylabel('count')
Quick Exercise: Try some scatter plots of the features versus the target.
Are there any features that seem to have a strong correlation with the target value? Any that don't?
Remember, you can get at the data columns using:
column_i = data.data[:, i]
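To quantify what the scatter plots suggest, you can compute the correlation coefficient between each feature column and the target. Here is a minimal sketch using synthetic stand-in data (the array names and values are illustrative, not the Boston columns):

```python
import numpy as np

# Synthetic stand-in: 100 samples, 3 feature columns, target tied to column 0
rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = 2.0 * X[:, 0] + 0.1 * rng.randn(100)

# Pearson correlation of each column with the target
corrs = [np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])]
print(corrs)  # column 0 should stand out with a correlation near 1
```

A feature with correlation near +1 or -1 will show a clean diagonal trend in its scatter plot; one near 0 will look like noise.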
This is a manual version of a technique called feature selection.
In machine learning it is often useful to apply feature selection to decide which features are most informative for a particular problem. Automated methods exist which quantify this sort of exercise of choosing the most informative features. We won't cover feature selection in this tutorial, but you can read about it elsewhere.
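For a taste of what such automated methods look like, here is a minimal sketch using scikit-learn's SelectKBest with a univariate F-test score, on synthetic data in which only two columns carry signal (the data and column choices are illustrative):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.randn(200)  # only columns 1 and 4 matter

# Keep the 2 features most predictive of y under a univariate F-test
selector = SelectKBest(score_func=f_regression, k=2).fit(X, y)
print(selector.get_support(indices=True))  # should recover columns 1 and 4
```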
Now we'll use scikit-learn to perform a simple linear regression on the housing data. There are many possible regressors to use. A particularly simple one is LinearRegression: this is basically a wrapper around an ordinary least squares calculation. We'll set it up like this:
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(data.data, data.target)
predicted = clf.predict(data.data)
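After fitting, the learned least-squares parameters are available as coef_ and intercept_. A minimal sketch with toy data (the variable lr and the toy values are illustrative, so as not to clobber the clf fit above), where the fit can be checked by eye:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data drawn exactly from y = 3x + 2, so OLS should recover those values
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 3.0 * X[:, 0] + 2.0

lr = LinearRegression().fit(X, y)
print(lr.coef_, lr.intercept_)  # approximately [3.0] and 2.0
```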
plt.scatter(data.target, predicted)
plt.plot([0, 50], [0, 50], '--k')
plt.axis('tight')
plt.xlabel('True price ($1000s)')
plt.ylabel('Predicted price ($1000s)')
The prediction at least correlates with the true price, though there are clearly some biases. We could imagine evaluating the performance of the regressor by, say, computing the RMS residuals between the true and predicted price. There are some subtleties in this, however, which we'll cover in a later section.
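The RMS residual mentioned above could be computed like this; the toy arrays stand in for data.target and the predicted values:

```python
import numpy as np

# Toy true/predicted prices standing in for data.target and clf.predict(...)
y_true = np.array([24.0, 21.6, 34.7, 33.4])
y_pred = np.array([25.0, 20.6, 33.7, 34.4])

# Root-mean-square of the residuals
rms = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rms)  # 1.0 here, since every residual is +/- 1
```

Note that this evaluates the model on the same data it was fit to, which is one of the subtleties deferred to the later section.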
There are many examples of regression-type problems in machine learning beyond house prices: any task that maps measured features to a continuous quantity fits this mold.
There are many other types of regressors available in scikit-learn: we'll try one more here.
Use the DecisionTreeRegressor class to fit the housing data.
You can copy and paste some of the above code, replacing LinearRegression with DecisionTreeRegressor.
from sklearn.tree import DecisionTreeRegressor
# Instantiate the model, fit the results, and scatter in vs. out
Do you see anything surprising in the results?
The decision tree regressor is an example of an instance-based algorithm: rather than trying to determine a single global model that best fits the data, an instance-based algorithm in some way matches unknown data to the known catalog of training points.
How does this fact explain the results you saw here?
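One possible solution sketch, on synthetic data rather than the Boston set, makes the surprise concrete: with default settings a decision tree keeps splitting until it has memorized the training set, so predictions on the training data match the targets almost exactly.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = X[:, 0] + 0.5 * rng.randn(100)

# An unpruned tree grows until each leaf holds a single training point
tree = DecisionTreeRegressor().fit(X, y)
predicted = tree.predict(X)

print(np.max(np.abs(predicted - y)))  # essentially zero: a perfect training fit
```

A true-vs-predicted scatter plot of this output would lie exactly on the diagonal, which looks like an ideal regressor but is really a symptom of evaluating on the training data.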
We'll return to the subject of Decision trees at a later point in the tutorial.