By the end of this section you will know how to fit a scikit-learn regressor to data with its fit(...) method and make predictions with its predict(...) method.
Here we'll do a short example of a regression problem: learning a continuous value from a set of features.
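Every scikit-learn estimator follows the same fit/predict pattern. As a quick, self-contained sketch of that interface (using tiny made-up numbers, not the Boston data we'll load below):
import numpy as np
from sklearn.linear_model import LinearRegression

# Tiny synthetic data, just to illustrate the interface
X = np.array([[0.0], [1.0], [2.0], [3.0]])   # shape (n_samples, n_features)
y = np.array([1.0, 3.0, 5.0, 7.0])           # shape (n_samples,)

model = LinearRegression()       # 1. instantiate the estimator
model.fit(X, y)                  # 2. learn model parameters from the training data
y_pred = model.predict([[4.0]])  # 3. predict the target for a new sample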
We'll use the simple Boston house prices set, available in scikit-learn. This records measurements of 13 attributes of housing markets around Boston, as well as the median price. The question is: can you predict the price of a new market given its attributes?
First we'll load the dataset:
from sklearn.datasets import load_boston
data = load_boston()
print(data.keys())
['data', 'feature_names', 'DESCR', 'target']
We can see that there are just over 500 data points:
print(data.data.shape)
print(data.target.shape)
(506, 13)
(506,)
The DESCR attribute holds a long description of the dataset:
print(data.DESCR)
Boston House Prices dataset

Notes
------
Data Set Characteristics:

    :Number of Instances: 506
    :Number of Attributes: 13 numeric/categorical predictive
    :Median Value (attribute 14) is usually the target

    :Attribute Information (in order):
        - CRIM     per capita crime rate by town
        - ZN       proportion of residential land zoned for lots over 25,000 sq.ft.
        - INDUS    proportion of non-retail business acres per town
        - CHAS     Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
        - NOX      nitric oxides concentration (parts per 10 million)
        - RM       average number of rooms per dwelling
        - AGE      proportion of owner-occupied units built prior to 1940
        - DIS      weighted distances to five Boston employment centres
        - RAD      index of accessibility to radial highways
        - TAX      full-value property-tax rate per $10,000
        - PTRATIO  pupil-teacher ratio by town
        - B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
        - LSTAT    % lower status of the population
        - MEDV     Median value of owner-occupied homes in $1000's

    :Missing Attribute Values: None

    :Creator: Harrison, D. and Rubinfeld, D.L.

This is a copy of UCI ML housing dataset.
http://archive.ics.uci.edu/ml/datasets/Housing

This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.

The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic prices and the demand for clean air', J. Environ. Economics & Management, vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics ...', Wiley, 1980. N.B. Various transformations are used in the table on pages 244-261 of the latter.

The Boston house-price data has been used in many machine learning papers that address regression problems.

**References**

    - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
    - Quinlan, R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
    - many more! (see http://archive.ics.uci.edu/ml/datasets/Housing)
It often helps to quickly visualize pieces of the data using histograms, scatter plots, or other plot types. Here we'll load pylab and show a histogram of the target values: the median price in each neighborhood.
%pylab inline
Populating the interactive namespace from numpy and matplotlib
plt.hist(data.target)
plt.xlabel('price ($1000s)')
plt.ylabel('count')
plt.show()
Quick Exercise: Try some scatter plots of the features versus the target.
Are there any features that seem to have a strong correlation with the target value? Any that don't?
Remember, you can get at the data columns using:
column_i = data.data[:, i]
# Scatter the first six features against the target, sharing the y-axis
n_feats = 6
fig, axs = plt.subplots(1, n_feats, sharey=True, figsize=(24, 8))
for ii in range(n_feats):
    axs[ii].scatter(data.data[:, ii], data.target)
    axs[ii].set_title('Median value (Y) vs {0} (X)'.format(data.feature_names[ii]), fontsize=16)
plt.show()
This is a manual version of a technique called feature selection.
In machine learning it is often useful to apply feature selection to decide which features are most informative for a particular problem, and automated methods exist that quantify this exercise of choosing the most informative features. We won't cover feature selection in depth in this tutorial, but you can read about it elsewhere.
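For reference, here is a minimal sketch of what an automated approach can look like, using scikit-learn's SelectKBest with the f_regression score (run after loading data as above; the choice of k=5 is arbitrary and nothing later depends on this cell):
from sklearn.feature_selection import SelectKBest, f_regression

# Score each feature by its univariate relationship with the target
selector = SelectKBest(score_func=f_regression, k=5)
selector.fit(data.data, data.target)

# Report the five highest-scoring features
for name, score, keep in zip(data.feature_names, selector.scores_, selector.get_support()):
    if keep:
        print('{0}: score = {1:.1f}'.format(name, score))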
Now we'll use scikit-learn to perform a simple linear regression on the housing data. There are many regressors to choose from; a particularly simple one is LinearRegression, which is basically a wrapper around an ordinary least squares calculation. We'll set it up like this:
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(data.data, data.target)
LinearRegression(copy_X=True, fit_intercept=True, normalize=False)
# Predict prices for the (training) data and compare them with the true values
predicted = clf.predict(data.data)
plt.figure(figsize=(10,8))
plt.scatter(data.target, predicted)
plt.plot([0, 50], [0, 50], '--k', linewidth=3)
plt.axis('tight')
plt.xlabel('True price ($1000s)',fontsize=15)
plt.ylabel('Predicted price ($1000s)', fontsize=15)
plt.tick_params(axis='both', which='major', labelsize=15)
plt.grid(True)
plt.show()
# How does each attribute influence the predicted price?
# The fitted coefficients give the contribution of each attribute (in its own units) to the prediction
print('Attribute information and Model coefficients:')
for ii in range(len(clf.coef_)):
    print('{0}: {1:.3f}'.format(data.feature_names[ii], clf.coef_[ii]))

# The mean squared error and its square root (the RMS residual, in thousands of dollars)
print("\nMean squared error: %.2f" % np.mean((predicted - data.target) ** 2))
print("Root mean squared error: %.2f (K USD)" % np.sqrt(np.mean((predicted - data.target) ** 2)))
Attribute information and Model coefficients:
CRIM: -0.107
ZN: 0.046
INDUS: 0.021
CHAS: 2.689
NOX: -17.796
RM: 3.805
AGE: 0.001
DIS: -1.476
RAD: 0.306
TAX: -0.012
PTRATIO: -0.953
B: 0.009
LSTAT: -0.525

Mean squared error: 21.90
Root mean squared error: 4.68 (K USD)
The prediction at least correlates with the true price, though there are clearly some biases. We can evaluate the performance of the regressor by, say, computing the RMS residual between the true and predicted prices, as we did above.
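Scikit-learn's metrics module computes this directly; a short sketch, assuming the predicted array from the LinearRegression fit above:
import numpy as np
from sklearn.metrics import mean_squared_error

# Root-mean-square error of the training-set predictions (same quantity printed above)
rmse = np.sqrt(mean_squared_error(data.target, predicted))
print('RMSE: {0:.2f} (K USD)'.format(rmse))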
There are many other types of regressors available in scikit-learn: we'll try one more here.
Use the DecisionTreeRegressor class to fit the housing data.
You can copy and paste some of the above code, replacing LinearRegression with DecisionTreeRegressor.
from sklearn.tree import DecisionTreeRegressor
# Instantiate the model, fit the results, and scatter in vs. out
clf = DecisionTreeRegressor()
clf.fit(data.data, data.target)
predicted = clf.predict(data.data)
plt.figure(figsize=(10,8))
plt.scatter(data.target, predicted)
plt.plot([0, 50], [0, 50], '--k', linewidth=3)
plt.axis('tight')
plt.xlabel('True price ($1000s)',fontsize=15)
plt.ylabel('Predicted price ($1000s)', fontsize=15)
plt.tick_params(axis='both', which='major', labelsize=15)
plt.grid(True)
plt.show()
Do you see anything surprising in the results?
The decision tree regressor is an example of an instance-based algorithm: rather than trying to determine a model that best fits the data, an instance-based algorithm in some way matches unknown data to the known catalog of training points.
How does this fact explain the results you saw here?
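One way to check this is to hold out part of the data and compare training and test performance. Below is a minimal sketch, assuming train_test_split is available from sklearn.model_selection (newer scikit-learn versions; older ones keep it in sklearn.cross_validation):
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Hold out 25% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0)

tree = DecisionTreeRegressor()
tree.fit(X_train, y_train)

# A tree deep enough to memorize its training set scores ~1.0 there,
# but noticeably worse on data it has never seen
print('Training R^2: {0:.2f}'.format(tree.score(X_train, y_train)))
print('Test R^2: {0:.2f}'.format(tree.score(X_test, y_test)))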
# A final example: fit a line to noisy training data, then extrapolate beyond its range
# x ranges from 0 to 30
x = 30 * np.random.random(40)
# y = a*x + b with noise
y = 0.5 * x + 5.0 + np.random.normal(size=x.shape) * 3
# create a linear regression model
clf = LinearRegression()
clf.fit(x[:, None], y)
# predict y for new x values (20 to 50), extending beyond the training range
x_new = np.linspace(20, 50, 10)
y_new = clf.predict(x_new[:, None])
# plot the results
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111)
ax.scatter(x, y, label='model fitting data')
ax.scatter(x_new, y_new, color='r', label='new observations & predictions')
ax.set_xlabel('x',fontsize=15)
ax.set_ylabel('y',fontsize=15)
ax.legend(loc='upper left', fontsize=15)
ax.tick_params(axis='both', which='major', labelsize=15)
ax.axis('tight')
ax.grid(True)
plt.show()