When we split our data into training and test sets, we simply took the first 80% of rows as the training set and the remaining 20% as the test set. A different split, however, would give different results. To get around this dependence on a single split, we use cross-validation.
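To see the problem concretely, here is a small sketch showing that accuracy varies with the choice of split. Synthetic data from make_classification stands in for the CSV used below, and the three random_state values are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the real dataset
X_demo, y_demo = make_classification(n_samples=500, random_state=0)

scores = []
for seed in range(3):
    # A different random_state gives a different 80/20 split...
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_demo, y_demo, test_size=0.2, random_state=seed)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))

# ...and, in general, a different accuracy score
print(scores)
```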
import pandas as pd
import numpy as np

df = pd.read_csv('data.csv')

# Drop columns we will not use as features
df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)

# Fill missing ages with the mean age
age_mean = df['Age'].mean()
df['Age'] = df['Age'].fillna(age_mean)

# Fill missing ports of embarkation with the most common value, 'S'
df['Embarked'] = df['Embarked'].fillna('S')

# Encode 'Sex' numerically, and one-hot encode 'Embarked'
df['Sex'] = df['Sex'].map({'female': 0, 'male': 1})
df = pd.concat([df, pd.get_dummies(df['Embarked'], prefix='Embarked')], axis=1)
df = df.drop(['Embarked'], axis=1)

# Features are every column after 'Survived'; the target is 'Survived'
X = df.iloc[:, 2:].values
y = df['Survived'].values
Cross-validation splits the data into k partitions, or folds (here, five). Each fold in turn serves as the test set while the model is trained on the remaining four; we then average the five accuracy scores. Scikit-learn can generate the cross-validation folds for us automatically.
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier

cv = KFold(n_splits=5)
results = []
for training_set, test_set in cv.split(X):
    X_train = X[training_set]
    y_train = y[training_set]
    X_test = X[test_set]
    y_test = y[test_set]
    model = RandomForestClassifier(n_estimators=100)
    model.fit(X_train, y_train)
    y_prediction = model.predict(X_test)
    result = np.mean(y_test == y_prediction)
    results.append(result)
    print("prediction accuracy:", result)
print("overall prediction accuracy:", np.mean(results))
prediction accuracy: 0.776536312849
prediction accuracy: 0.814606741573
prediction accuracy: 0.876404494382
prediction accuracy: 0.76404494382
prediction accuracy: 0.837078651685
overall prediction accuracy: 0.813734228862
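As a side note, Scikit-learn can also run this whole loop in a single call. A minimal sketch with cross_val_score, again using synthetic data in place of our X and y:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the features and target built above
X_demo, y_demo = make_classification(n_samples=500, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)

# cv=5 reproduces the five-fold scheme; each entry is one fold's accuracy
scores = cross_val_score(model, X_demo, y_demo, cv=5)
print("per-fold accuracy:", scores)
print("overall accuracy:", scores.mean())
```

This replaces the explicit KFold loop with one function call, at the cost of having to refit the model yourself afterwards if you want to inspect it.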