Cleaning Data with PySpark
Mike Metzger
Data Engineering Consultant
Reading Parquet files
df = spark.read.format('parquet').load('filename.parquet')
df = spark.read.parquet('filename.parquet')
Writing Parquet files
df.write.format('parquet').save('filename.parquet')
df.write.parquet('filename.parquet')
Parquet as a backing store for SparkSQL operations
flight_df = spark.read.parquet('flights.parquet')
flight_df.createOrReplaceTempView('flights')
short_flights_df = spark.sql('SELECT * FROM flights WHERE flightduration < 100')