In previous sections, we treated scikit-learn as a black box. We now review how to tune the model's parameters to make more accurate predictions.
import pandas as pd
import numpy as np

df = pd.read_csv('data.csv')

# Split into training and test sets
df_train = df.iloc[:712, :]
df_test = df.iloc[712:, :]

# Drop columns we will not use as features
df_train = df_train.drop(['Name', 'Ticket', 'Cabin'], axis=1)

# Fill missing values: mean age, most common port of embarkation
age_mean = df_train['Age'].mean()
df_train['Age'] = df_train['Age'].fillna(age_mean)
df_train['Embarked'] = df_train['Embarked'].fillna('S')

# Encode categorical features numerically
df_train['Sex'] = df_train['Sex'].map({'female': 0, 'male': 1})
df_train = pd.concat([df_train, pd.get_dummies(df_train['Embarked'], prefix='Embarked')], axis=1)
df_train = df_train.drop(['Embarked'], axis=1)

X_train = df_train.iloc[:, 2:].values
y_train = df_train['Survived'].values
The documentation for the Random Forest Classifier details the model's input parameters. These include the number of trees in the forest (n_estimators) and the maximum depth of each tree (max_depth). It is not obvious at the outset which values would be optimal.
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
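Besides reading the documentation, we can list the full set of tunable parameters and their defaults programmatically with get_params(); a minimal sketch:

```python
from sklearn.ensemble import RandomForestClassifier

# get_params() returns a dict mapping every tunable parameter to its default.
clf = RandomForestClassifier()
params = clf.get_params()

print(sorted(params))        # includes 'max_depth', 'max_features', 'n_estimators', ...
print(params['max_depth'])   # default: None (trees grow until leaves are pure)
```

This is a handy cross-check when deciding which parameters to put into a grid.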
GridSearchCV allows us to test a desired range of input parameters and review the performance of each combination by cross-validation. Here we vary the fraction of features considered each time a split is made (max_features: 50% or 100% of the features) and the maximum depth of each tree (max_depth: 5 levels or no limit).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in versions before 0.18

parameter_grid = {
    'max_features': [0.5, 1.],
    'max_depth': [5, None]  # max_depth expects an integer or None
}

grid_search = GridSearchCV(RandomForestClassifier(n_estimators=100), parameter_grid, cv=5, verbose=3)
grid_search.fit(X_train, y_train)
Fitting 5 folds for each of 4 candidates, totalling 20 fits
[CV] max_features=0.5, max_depth=5.0 .................................
[CV] ........ max_features=0.5, max_depth=5.0, score=0.818182 -   0.2s
[CV] max_features=0.5, max_depth=5.0 .................................
[CV] ........ max_features=0.5, max_depth=5.0, score=0.797203 -   0.2s
[CV] max_features=0.5, max_depth=5.0 .................................
[CV] ........ max_features=0.5, max_depth=5.0, score=0.888112 -   0.2s
[CV] max_features=0.5, max_depth=5.0 .................................
[CV] ........ max_features=0.5, max_depth=5.0, score=0.823944 -   0.2s
[CV] max_features=0.5, max_depth=5.0 .................................
[CV] ........ max_features=0.5, max_depth=5.0, score=0.773050 -   0.2s
[CV] max_features=1.0, max_depth=5.0 .................................
[CV] ........ max_features=1.0, max_depth=5.0, score=0.797203 -   0.2s
[CV] max_features=1.0, max_depth=5.0 .................................
[CV] ........ max_features=1.0, max_depth=5.0, score=0.818182 -   0.2s
[CV] max_features=1.0, max_depth=5.0 .................................
[CV] ........ max_features=1.0, max_depth=5.0, score=0.874126 -   0.2s
[CV] max_features=1.0, max_depth=5.0 .................................
[CV] ........ max_features=1.0, max_depth=5.0, score=0.830986 -   0.2s
[CV] max_features=1.0, max_depth=5.0 .................................
[CV] ........ max_features=1.0, max_depth=5.0, score=0.794326 -   0.2s
[CV] max_features=0.5, max_depth=None ................................
[CV] ....... max_features=0.5, max_depth=None, score=0.783217 -   0.2s
[CV] max_features=0.5, max_depth=None ................................
[CV] ....... max_features=0.5, max_depth=None, score=0.776224 -   0.2s
[CV] max_features=0.5, max_depth=None ................................
[CV] ....... max_features=0.5, max_depth=None, score=0.860140 -   0.2s
[CV] max_features=0.5, max_depth=None ................................
[CV] ....... max_features=0.5, max_depth=None, score=0.788732 -   0.2s
[CV] max_features=0.5, max_depth=None ................................
[CV] ....... max_features=0.5, max_depth=None, score=0.794326 -   0.2s
[CV] max_features=1.0, max_depth=None ................................
[CV] ....... max_features=1.0, max_depth=None, score=0.783217 -   0.2s
[CV] max_features=1.0, max_depth=None ................................
[CV] ....... max_features=1.0, max_depth=None, score=0.797203 -   0.2s
[CV] max_features=1.0, max_depth=None ................................
[CV] ....... max_features=1.0, max_depth=None, score=0.881119 -   0.2s
[CV] max_features=1.0, max_depth=None ................................
[CV] ....... max_features=1.0, max_depth=None, score=0.788732 -   0.2s
[CV] max_features=1.0, max_depth=None ................................
[CV] ....... max_features=1.0, max_depth=None, score=0.765957 -   0.2s
[Parallel(n_jobs=1)]: Done 20 out of 20 | elapsed: 3.8s finished
GridSearchCV(cv=5, error_score='raise',
       estimator=RandomForestClassifier(bootstrap=True, class_weight=None,
            criterion='gini', max_depth=None, max_features='auto',
            max_leaf_nodes=None, min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=1,
            oob_score=False, random_state=None, verbose=0, warm_start=False),
       fit_params={}, iid=True, n_jobs=1,
       param_grid={'max_features': [0.5, 1.0], 'max_depth': [5.0, None]},
       pre_dispatch='2*n_jobs', refit=True, scoring=None, verbose=3)
We now review the results.
grid_search.grid_scores_  # replaced by grid_search.cv_results_ in scikit-learn >= 0.18
[mean: 0.82022, std: 0.03842, params: {'max_features': 0.5, 'max_depth': 5.0},
 mean: 0.82303, std: 0.02894, params: {'max_features': 1.0, 'max_depth': 5.0},
 mean: 0.80056, std: 0.03040, params: {'max_features': 0.5, 'max_depth': None},
 mean: 0.80337, std: 0.04026, params: {'max_features': 1.0, 'max_depth': None}]
We now review the best-performing tuning parameters.
grid_search.best_params_
{'max_depth': 5.0, 'max_features': 1.0}
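Beyond best_params_, a fitted grid search also exposes best_score_ (the best mean cross-validation score) and, because refit=True by default, best_estimator_ (the winning model refit on the full training data). A minimal sketch on synthetic data (the arrays here are illustrative, not the Titanic features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)   # label depends only on the first feature

grid = GridSearchCV(RandomForestClassifier(n_estimators=10, random_state=0),
                    {'max_depth': [3, None]}, cv=3)
grid.fit(X, y)

print(grid.best_params_)       # the winning parameter combination
print(grid.best_score_)        # its mean cross-validation score
print(grid.best_estimator_)    # refit on all of X, ready to predict
```

When refit=True, best_estimator_ can be used directly instead of re-fitting a model by hand.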
We then set these tuning parameters in our model.
model = RandomForestClassifier(n_estimators=100, max_features=1.0, max_depth=5, random_state=0)
model = model.fit(X_train, y_train)
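A fitted forest also reports how much each feature contributed to its splits via the feature_importances_ attribute, which is a useful sanity check on a tuned model. A small sketch on synthetic data (not the Titanic columns):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = (X[:, 0] > 0.5).astype(int)   # only the first feature is informative

forest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
forest.fit(X, y)

# Importances sum to 1; the informative feature should dominate.
print(forest.feature_importances_)
```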
Exercise
# Apply the same preprocessing to the test set, reusing the training-set mean age
df_test = df_test.drop(['Name', 'Ticket', 'Cabin'], axis=1)
df_test['Age'] = df_test['Age'].fillna(age_mean)
df_test['Embarked'] = df_test['Embarked'].fillna('S')
df_test['Sex'] = df_test['Sex'].map({'female': 0, 'male': 1}).astype(int)
df_test = pd.concat([df_test, pd.get_dummies(df_test['Embarked'], prefix='Embarked')], axis=1)
df_test = df_test.drop(['Embarked'], axis=1)

X_test = df_test.iloc[:, 2:].values
y_test = df_test['Survived']
y_prediction = model.predict(X_test)

# Accuracy: fraction of correct predictions on the held-out test set
np.sum(y_prediction == y_test) / float(len(y_test))
0.86033519553072624
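The manual calculation above is exactly classification accuracy, which scikit-learn also provides as sklearn.metrics.accuracy_score. A small illustration with toy arrays:

```python
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

# Equivalent to np.sum(y_pred == y_true) / float(len(y_true))
print(accuracy_score(y_true, y_pred))   # 0.8
```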