Advanced DataFrame operations

Introduction to PySpark

Ben Schmidt

Data Engineer

Joins in PySpark

  • Combine rows from two or more DataFrames based on common columns
  • Types of joins: inner, left, right, and outer, as in SQL

  • Syntax: DataFrame1.join(DataFrame2, on="column", how="join_type")

# Joining on id column using an inner join
df_joined = df1.join(df2, on="id", how="inner")

# Joining on columns with different names
df_joined = df1.join(df2, df1.Id == df2.Name, "inner")

Union operation

  • Combines rows from two DataFrames with the same schema

  • Syntax: DataFrame1.union(DataFrame2)

# Union of two DataFrames with identical schemas
df_union = df1.union(df2)

Working with Arrays and Maps

Arrays: Useful for storing lists within columns. Syntax: ArrayType(StringType(), False), where the second argument (containsNull) controls whether elements may be null

from pyspark.sql.functions import array, lit

# Create an array column
df = df.withColumn("scores", array(lit(85), lit(90), lit(78)))

Maps: Key-value pairs, helpful for dictionary-like data. Syntax: MapType(StringType(), StringType())

from pyspark.sql.types import StructField, StructType, StringType, MapType

schema = StructType([
    StructField('name', StringType(), True),
    StructField('properties', MapType(StringType(), StringType()), True)
])

Working with Structs

  • Structs: Create nested structures within rows. Syntax: StructType([StructField("name", DataType(), nullable)])

# Create a struct column
df = df.withColumn("name_struct", struct("first_name", "last_name"))


Let's practice!
