Supervised Learning with scikit-learn
George Boorman
Core Curriculum Manager, DataCamp
Recall: Linear regression minimizes a loss function
It chooses a coefficient, $a$, for each feature variable, plus an intercept, $b$
Large coefficients can lead to overfitting
Regularization: Penalize large coefficients
Loss function = OLS loss function + $\alpha \sum_{i=1}^{n} {a_i}^2$
Ridge penalizes large positive or negative coefficients
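Written out in full, with the OLS loss expanded as the residual sum of squares (a sketch of the same formula; the first sum runs over the $m$ training observations, the second, as above, over the $n$ coefficients):

$$\text{Loss} = \sum_{j=1}^{m} \left(y_j - \hat{y}_j\right)^2 + \alpha \sum_{i=1}^{n} {a_i}^2$$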
$\alpha$: parameter we need to choose
Picking $\alpha$ is similar to picking $k$ in KNN
Hyperparameter: variable we set before fitting, used to tune model performance (not learned from the data)
$\alpha$ controls model complexity
$\alpha = 0$: equivalent to OLS (can lead to overfitting)
Very high $\alpha$: can lead to underfitting
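A minimal sketch of that trade-off, using synthetic data from scikit-learn's make_regression rather than the course dataset (the data and alpha values here are illustrative assumptions):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

# Synthetic regression problem, purely for illustration
X, y = make_regression(n_samples=100, n_features=4, noise=10, random_state=42)

for alpha in [0, 1.0, 100.0, 10000.0]:
    ridge = Ridge(alpha=alpha).fit(X, y)
    # alpha = 0 reproduces OLS (not advised numerically, but fine here);
    # larger alpha shrinks every coefficient toward zero
    print(alpha, np.round(ridge.coef_, 2))

The course code below applies the same looping idea to the train/test split, scoring $R^2$ for a range of alphas: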
from sklearn.linear_model import Ridge

scores = []
for alpha in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    # Fit ridge regression at each alpha and score R^2 on the test set
    ridge = Ridge(alpha=alpha)
    ridge.fit(X_train, y_train)
    y_pred = ridge.predict(X_test)
    scores.append(ridge.score(X_test, y_test))
print(scores)
[0.2828466623222221, 0.28320633574804777, 0.2853000732200006,
0.26423984812668133, 0.19292424694100963]
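Lasso regression applies the same idea with a different penalty: it adds the absolute value, rather than the square, of each coefficient to the OLS loss:

$$\text{Loss} = \text{OLS loss} + \alpha \sum_{i=1}^{n} |a_i|$$

The same scoring loop, with Lasso in place of Ridge: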
from sklearn.linear_model import Lasso

scores = []
for alpha in [0.01, 1.0, 10.0, 20.0, 50.0]:
    # Fit lasso regression at each alpha and score R^2 on the test set
    lasso = Lasso(alpha=alpha)
    lasso.fit(X_train, y_train)
    lasso_pred = lasso.predict(X_test)
    scores.append(lasso.score(X_test, y_test))
print(scores)
[0.99991649071123, 0.99961700284223, 0.93882227671069, 0.74855318676232, -0.05741034640016]
Lasso can select important features of a dataset
Shrinks the coefficients of less important features to zero
Features not shrunk to zero are selected by lasso
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso

# Target: blood glucose levels; features: all other columns
X = diabetes_df.drop("glucose", axis=1).values
y = diabetes_df["glucose"].values
names = diabetes_df.drop("glucose", axis=1).columns

# Fit lasso on the whole dataset and extract its coefficients
lasso = Lasso(alpha=0.1)
lasso_coef = lasso.fit(X, y).coef_

# One bar per feature coefficient
plt.bar(names, lasso_coef)
plt.xticks(rotation=45)
plt.show()
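In the resulting bar plot, features whose coefficients were shrunk to zero show no bar, while the features with non-zero coefficients are the ones lasso has selected as predictors of blood glucose.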