Cleaning Data with PySpark
Mike Metzger
Data Engineering Consultant
DataFrames:
# Return rows where name starts with "M"
voter_df.filter(voter_df.name.like('M%'))

# Return name and position only
voters = voter_df.select('name', 'position')
voter_df.filter(voter_df.date > '1/1/2019') # or voter_df.where(...)
voter_df.select(voter_df.name)
voter_df.withColumn('year', F.year(voter_df.date))  # F = pyspark.sql.functions
voter_df.drop('unused_column')
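These operations chain naturally. A minimal runnable sketch, assuming a local SparkSession; the sample rows and their values are illustrative, not course data:

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName('cleaning_demo').getOrCreate()

# Hypothetical sample data mirroring the column names above
voter_df = spark.createDataFrame(
    [('Mike Metzger', 'Consultant', '2019-02-04'),
     ('Ann Smith', 'Analyst', '2018-11-12')],
    ['name', 'position', 'date'])

result = (voter_df
          .filter(voter_df.name.like('M%'))    # rows where name starts with "M"
          .withColumn('year', F.year('date'))  # derive a year column from the date
          .drop('date')                        # remove the now-redundant column
          .select('name', 'position', 'year'))
result.show()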
voter_df.filter(voter_df['name'].isNotNull())
voter_df.filter(F.year(voter_df.date) > 1800)
voter_df.where(voter_df['_c0'].contains('VOTE'))
voter_df.where(~voter_df._c1.isNull())
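These checks typically combine when cleaning a raw, headerless CSV import, where Spark auto-names the columns _c0, _c1, and so on. A short sketch, reusing the spark session from above; the file name 'voters.csv' is an assumption:

import pyspark.sql.functions as F

raw_df = spark.read.csv('voters.csv')  # headerless: columns arrive as _c0, _c1, ...

cleaned = (raw_df
           .where(raw_df['_c0'].contains('VOTE'))  # keep only rows mentioning VOTE
           .where(~raw_df._c1.isNull()))           # drop rows with a null _c1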
import pyspark.sql.functions as F
voter_df.withColumn('upper', F.upper('name'))
voter_df.withColumn('splits', F.split('name', ' '))
from pyspark.sql.types import IntegerType
voter_df.withColumn('year', voter_df['_c4'].cast(IntegerType()))
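Chained together on the hypothetical voter_df; the raw '_c4' column is an assumption carried over from a headerless CSV import:

import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType

voter_df = (voter_df
            .withColumn('upper', F.upper('name'))        # uppercase copy of name
            .withColumn('splits', F.split('name', ' '))  # ArrayType column of name parts
            .withColumn('year', voter_df['_c4'].cast(IntegerType())))  # string -> integer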
Various utility functions / transformations for interacting with ArrayType() columns:
F.size(<column>)
- returns the number of elements in an ArrayType() column
.getItem(<index>)
- retrieves the item at the given index of a list column
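For example, pulling apart the 'splits' column created by F.split() above; F.element_at() (available since Spark 2.4) is used here as a convenient way to grab the last element:

import pyspark.sql.functions as F

voter_df = voter_df.withColumn('name_parts', F.size('splits'))            # element count
voter_df = voter_df.withColumn('first_name', voter_df.splits.getItem(0))  # first element
voter_df = voter_df.withColumn('last_name', F.element_at('splits', -1))   # last element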