Evaluating multiple models

Supervised Learning with scikit-learn

George Boorman

Core Curriculum Manager, DataCamp

Different models for different problems

Some guiding principles

  • Size of the dataset
    • Fewer features = simpler model, faster training time
    • Some models require large amounts of data to perform well
  • Interpretability
    • Some models are easier to explain, which can be important for stakeholders
    • Linear regression has high interpretability, as we can understand the coefficients (see the sketch after this list)
  • Flexibility
    • May improve accuracy by making fewer assumptions about the data
    • KNN is a more flexible model; it doesn't assume any linear relationships
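Interpretability can be checked directly. Here is a minimal sketch, not from the original slides and using hypothetical synthetic data, of reading linear regression coefficients in scikit-learn:

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical synthetic data: two features with known linear effects
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

reg = LinearRegression().fit(X, y)

# Each coefficient is the expected change in y per unit change in that feature
for name, coef in zip(["feature_0", "feature_1"], reg.coef_):
    print("{}: {:.2f}".format(name, coef))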

It's all in the metrics

  • Regression model performance:

    • RMSE
    • R-squared
  • Classification model performance:

    • Accuracy
    • Confusion matrix
    • Precision, recall, F1-score
    • ROC AUC
  • Train several models and evaluate performance out of the box (a sketch computing these metrics follows below)
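All of these metrics are available in sklearn.metrics. A minimal sketch, using made-up labels and predictions rather than the slides' music data:

from sklearn.metrics import (accuracy_score, confusion_matrix,
                             classification_report, mean_squared_error,
                             r2_score, roc_auc_score)

# Hypothetical classification labels, predictions, and predicted probabilities
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
y_prob = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.95]

print(accuracy_score(y_true, y_pred))            # accuracy
print(confusion_matrix(y_true, y_pred))          # confusion matrix
print(classification_report(y_true, y_pred))     # precision, recall, F1-score
print(roc_auc_score(y_true, y_prob))             # ROC AUC

# Hypothetical regression targets and predictions
y_true_reg = [3.0, 5.0, 2.5, 7.0]
y_pred_reg = [2.8, 5.4, 2.0, 7.3]
print(mean_squared_error(y_true_reg, y_pred_reg) ** 0.5)  # RMSE
print(r2_score(y_true_reg, y_pred_reg))                   # R-squared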


A note on scaling

  • Models affected by scaling:
    • KNN
    • Linear Regression (plus Ridge, Lasso)
    • Logistic Regression
    • Artificial Neural Network

 

  • Best to scale our data before evaluating models (see the pipeline sketch below)
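One way to scale safely inside cross-validation, shown here as a sketch rather than as the slides' approach, is a scikit-learn Pipeline, which re-fits the scaler on each training fold. The make_classification data is a hypothetical stand-in for the music dataset:

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in data
X, y = make_classification(n_samples=200, random_state=42)

# Scaling happens inside each fold, so test-fold statistics never leak into training
pipe = Pipeline([("scaler", StandardScaler()),
                 ("knn", KNeighborsClassifier())])
print(cross_val_score(pipe, X, y, cv=5).mean())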

Evaluating classification models

import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, KFold, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X = music.drop("genre", axis=1).values
y = music["genre"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit the scaler on the training set only, then apply it to both splits
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

Evaluating classification models

models = {"Logistic Regression": LogisticRegression(), "KNN": KNeighborsClassifier(),
          "Decision Tree": DecisionTreeClassifier()}
results = []

# Cross-validate each model with the same 6-fold split and store the fold scores
for model in models.values():
    kf = KFold(n_splits=6, random_state=42, shuffle=True)
    cv_results = cross_val_score(model, X_train_scaled, y_train, cv=kf)
    results.append(cv_results)

plt.boxplot(results, labels=models.keys())
plt.show()
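As a quick numeric companion to the box plot, one could also print each model's mean cross-validation accuracy; this snippet is an addition, assuming the models and results objects defined above:

# Summarize the six fold scores per model
for name, scores in zip(models.keys(), results):
    print("{}: mean CV accuracy = {:.3f}".format(name, scores.mean()))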

Visualizing results

[Figure: box plot of accuracy for each model: Logistic Regression, KNN, and Decision Tree]


Test set performance

for name, model in models.items():
    # Fit on scaled training data and report accuracy on the scaled test set
    model.fit(X_train_scaled, y_train)
    test_score = model.score(X_test_scaled, y_test)
    print("{} Test Set Accuracy: {}".format(name, test_score))
Logistic Regression Test Set Accuracy: 0.844
KNN Test Set Accuracy: 0.82
Decision Tree Test Set Accuracy: 0.832

Let's practice!
