Tunable parameters in XGBoost

Extreme Gradient Boosting with XGBoost

Sergey Fogelson

Head of Data Science, TelevisaUnivision

Common tree tunable parameters

  • learning_rate (alias eta): shrinkage applied to each new tree's contribution
  • gamma: minimum loss reduction required to make a further split
  • lambda: L2 regularization on leaf weights
  • alpha: L1 regularization on leaf weights
  • max_depth: maximum depth per tree
  • subsample: fraction of samples used per tree
  • colsample_bytree: fraction of features used per tree

Linear tunable parameters

  • lambda: L2 regularization on weights
  • alpha: L1 regularization on weights
  • lambda_bias: L2 regularization on the bias term

  • You can also tune the number of estimators (boosting rounds) for both base learner types!


Let's get to some tuning!

