Parallel Programming with Dask in Python
James Fulton
Climate Informatics Researcher
# Load tabular dataset
import dask.dataframe as dd
dask_df = dd.read_parquet("dataset_parquet")
X = dask_df[['feature1', 'feature2', 'feature3']]
y = dask_df['target_column']
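At this point read_parquet() has only read metadata, so dask_df, X, and y are all lazy. A minimal sketch of inspecting them using standard Dask DataFrame attributes (the variable names follow the snippet above):

# The DataFrame is lazy - only metadata has been read so far
print(dask_df.npartitions)  # Number of partitions the data is split into
print(X.head())             # Triggers computation of the first partition only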
from dask_ml.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(X) # This is not lazy
standardized_X = scaler.transform(X) # This is lazy
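Because transform() is lazy, the standardized values are only calculated when explicitly requested. A minimal sketch, assuming the transformed result is a Dask DataFrame as above:

# The transform is lazy; values are only computed on request
print(standardized_X.head())              # Computes just the first partition
standardized_X_local = standardized_X.compute()  # Computes everything into pandas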
from dask_ml.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=True, test_size=0.2)
print(X_train)
Dask DataFrame Structure:
              feature1 feature2 feature3
npartitions=7
                 int64  float64  float64
                   ...      ...      ...
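The dask_model scored below is not defined in this excerpt. As a minimal sketch (an assumption, not the course's actual code), a Dask-ML estimator could be created and fitted like this; LinearRegression and the to_dask_array() conversion are illustrative choices:

# Hypothetical example: fit a Dask-ML estimator on the training split
from dask_ml.linear_model import LinearRegression

dask_model = LinearRegression()

# Dask-ML's linear models expect Dask arrays with known chunk sizes,
# so the DataFrame splits are converted before fitting
dask_model.fit(
    X_train.to_dask_array(lengths=True),
    y_train.to_dask_array(lengths=True),
)  # Not lazy - training is computed immediately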
# Evaluate the fitted model on the training data
train_score = dask_model.score(X_train, y_train)  # Not lazy
print(train_score)
-0.12321
# Evaluate the fitted model on the test data
test_score = dask_model.score(X_test, y_test)  # Not lazy
print(test_score)
-0.23453