%matplotlib inline
In this notebook we'll visualize the weight vector as a function of the L2 regularization parameter to demonstrate its effect. We'll use ridge regression to solve a classification problem.
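As a quick refresher (our own notation, not taken from the scikit-learn docs), ridge regression minimizes the squared error plus an L2 penalty on the weights, with $\alpha$ controlling how strongly the weights are penalized:

$$\hat{w} = \arg\min_{w} \; \lVert y - Xw \rVert_2^2 + \alpha \lVert w \rVert_2^2$$

Larger values of $\alpha$ therefore pull the entries of $w$ toward zero, which is exactly what we'll see in the plot below.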
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
X = data.data
y = data.target
X.shape
(569, 30)
classifier = Ridge()
coefs = []
errors = []
alphas = np.logspace(-5, 2, 40)
# Train the model with different values of the regularization parameter:
for a in alphas:
    classifier.set_params(alpha=a)
    classifier.fit(X, y)
    coefs.append(classifier.coef_)
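As a quick sanity check (a hypothetical snippet, not part of the original notebook), we should have collected one 30-dimensional weight vector per value of alpha:
coef_matrix = np.array(coefs)  # one row per alpha value
print(coef_matrix.shape)       # expected: (40, 30) -- 40 alphas x 30 features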
from matplotlib import pyplot as plt
plt.figure()
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
plt.xlabel('alpha')
plt.ylabel('weights')
plt.title('weight vector as a function of the regularization parameter')
plt.axis('tight');
Some observations:
As alpha increases (stronger regularization), the coefficients of the trained weight vector shrink toward zero, yielding a simpler (but more biased) solution.
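A minimal sketch to quantify this (the variable names below are our own; it simply reuses the coefs and alphas computed above), comparing the L2 norm of the fitted weight vector at the smallest and largest alpha:
import numpy as np

coef_norms = np.linalg.norm(np.array(coefs), axis=1)  # L2 norm of each of the 40 fitted weight vectors
print(f"alpha = {alphas[0]:.1e}: ||w||_2 = {coef_norms[0]:.3f}")
print(f"alpha = {alphas[-1]:.1e}: ||w||_2 = {coef_norms[-1]:.3f}")
# the norm at the largest alpha should be much smaller, matching the shrinkage seen in the plot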