Grid Search

Machine Learning with PySpark

Andrew Collier

Data Scientist, Fathom Data

Choosing an optimal parameter value


Cars revisited (again)

cars.select('mass', 'cyl', 'consumption').show(5)
+------+---+-----------+
|  mass|cyl|consumption|
+------+---+-----------+
|1451.0|  6|       9.05|
|1129.0|  4|       6.53|
|1399.0|  4|       7.84|
|1147.0|  4|       7.84|
|1111.0|  4|       9.05|
+------+---+-----------+
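
The cars_train and cars_test splits used on the following slides are assumed to have been created earlier. A minimal sketch of one way they might be prepared (the choice of predictor columns here is an assumption):

from pyspark.ml.feature import VectorAssembler

# Bundle the predictor columns into a single 'features' vector column.
# (Using 'mass' and 'cyl' is illustrative; the course may use more columns.)
assembler = VectorAssembler(inputCols=['mass', 'cyl'], outputCol='features')
cars_assembled = assembler.transform(cars)

# Randomly split into training (80%) and testing (20%) sets.
cars_train, cars_test = cars_assembled.randomSplit([0.8, 0.2], seed=13)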

Fuel consumption with intercept

Linear regression with an intercept. Fit to training data.

from pyspark.ml.regression import LinearRegression

regression = LinearRegression(labelCol='consumption', fitIntercept=True)
regression = regression.fit(cars_train)
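
The evaluator used below is not defined on these slides; a minimal sketch, assuming it measures RMSE on the 'consumption' column:

from pyspark.ml.evaluation import RegressionEvaluator

# RMSE evaluator comparing the 'consumption' label to the model predictions.
evaluator = RegressionEvaluator(labelCol='consumption',
                                predictionCol='prediction',
                                metricName='rmse')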

Calculate the RMSE on the testing data.

evaluator.evaluate(regression.transform(cars_test))
# RMSE for model with an intercept
0.745974203928479

Fuel consumption without intercept

Linear regression without an intercept. Fit to training data.

regression = LinearRegression(labelCol='consumption', fitIntercept=False)
regression = regression.fit(cars_train)

Calculate the RMSE on the testing data.

evaluator.evaluate(regression.transform(cars_test))

# RMSE for model without an intercept (second model)
0.852819012439
# RMSE for model with an intercept    (first model)
0.745974203928

Parameter grid

from pyspark.ml.tuning import ParamGridBuilder

# Create a parameter grid builder
params = ParamGridBuilder()

# Add grid points
params = params.addGrid(regression.fitIntercept, [True, False])

# Construct the grid
params = params.build()

# How many models?
print('Number of models to be tested: ', len(params))
Number of models to be tested:  2
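
The grid built above is a list of parameter maps, one per candidate model; a quick way to inspect it:

# Each parameter map pairs fitIntercept with one of the grid values.
for param_map in params:
    print(param_map)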

Grid search with cross-validation

Create a cross-validator and fit to the training data.

from pyspark.ml.tuning import CrossValidator

# regression must be the (unfitted) LinearRegression estimator here.
cv = CrossValidator(estimator=regression,
                    estimatorParamMaps=params,
                    evaluator=evaluator)
cv = cv.setNumFolds(10).setSeed(13).fit(cars_train)

What's the cross-validated RMSE for each model?

cv.avgMetrics
[0.800663722151, 0.907977823182]
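
The metrics are reported in the same order as the parameter grid, so the best combination can be read off directly; a small sketch:

# Find the lowest cross-validated RMSE and the parameter map that produced it.
best_rmse = min(cv.avgMetrics)
best_params = params[cv.avgMetrics.index(best_rmse)]
print(best_rmse, best_params)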

The best model & parameters

# Access the best model
cv.bestModel

Or just use the cross-validator object.

predictions = cv.transform(cars_test)

Retrieve the best parameter.

cv.bestModel.explainParam('fitIntercept')
'fitIntercept: whether to fit an intercept term (default: True, current: True)'
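
The best model can also be scored on the testing data and its parameter value read back directly; a sketch using the objects defined above:

# transform() on the fitted cross-validator delegates to the best model,
# so this is the test RMSE for the winning configuration.
evaluator.evaluate(cv.transform(cars_test))

# Retrieve just the parameter value rather than its full description.
cv.bestModel.getOrDefault('fitIntercept')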

A more complicated grid

params = ParamGridBuilder() \
            .addGrid(regression.fitIntercept, [True, False]) \
            .addGrid(regression.regParam, [0.001, 0.01, 0.1, 1, 10]) \
            .addGrid(regression.elasticNetParam, [0, 0.25, 0.5, 0.75, 1]) \
            .build()

How many models now?

print('Number of models to be tested: ', len(params))
Number of models to be tested:  50
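
A sketch of rerunning the same cross-validation with this larger grid and inspecting the chosen regularization settings (reusing the objects defined above):

cv = CrossValidator(estimator=regression,
                    estimatorParamMaps=params,
                    evaluator=evaluator,
                    numFolds=10,
                    seed=13).fit(cars_train)

# Report the regularization parameters of the best model found by the search.
print(cv.bestModel.explainParam('regParam'))
print(cv.bestModel.explainParam('elasticNetParam'))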

Find the best parameters!
