Interacting with PySpark DataFrames

Big Data Fundamentals with PySpark

Upendra Devisetty

Science Analyst, CyVerse

DataFrame operators in PySpark

  • DataFrame operations: Transformations and Actions

  • DataFrame Transformations:

    • select(), filter(), groupby(), orderBy(), dropDuplicates(), and withColumnRenamed()
  • DataFrame Actions:

    • printSchema(), head(), show(), count(), columns, and describe()

    Correction: printSchema() is a method available on any Spark DataFrame, not an action


select() and show() operations

  • select() transformation subsets the columns in the DataFrame
df_id_age = test.select('Age')
  • show() action prints the first 20 rows of the DataFrame by default; pass a number to print fewer
df_id_age.show(3)
+---+
|Age|
+---+
| 17|
| 17|
| 17|
+---+
only showing top 3 rows

filter() and show() operations

  • filter() transformation keeps only the rows that satisfy a given condition
new_df_age21 = new_df.filter(new_df.Age > 21)
new_df_age21.show(3)
+-------+------+---+
|User_ID|Gender|Age|
+-------+------+---+
|1000002|     M| 55|
|1000003|     M| 26|
|1000004|     M| 46|
+-------+------+---+
only showing top 3 rows

groupby() and count() operations

  • groupby() transformation groups the rows of a DataFrame by one or more columns, so that an aggregation such as count() can be applied to each group
test_df_age_group = test_df.groupby('Age')
test_df_age_group.count().show(3)
+---+------+
|Age| count|
+---+------+
| 26|219587|
| 17|     4|
| 55| 21504|
+---+------+
only showing top 3 rows

orderBy() Transformations

  • orderBy() transformation sorts the DataFrame based on one or more columns, ascending by default
test_df_age_group.count().orderBy('Age').show(3)
+---+-----+
|Age|count|
+---+-----+
|  0|15098|
| 17|    4|
| 18|99660|
+---+-----+
only showing top 3 rows

dropDuplicates()

  • dropDuplicates() removes the duplicate rows of a DataFrame
test_df_no_dup = test_df.select('User_ID','Gender', 'Age').dropDuplicates()
test_df_no_dup.count()
5892

withColumnRenamed() Transformations

  • withColumnRenamed() renames a column in the DataFrame
test_df_sex = test_df.withColumnRenamed('Gender', 'Sex')
test_df_sex.show(3)
+-------+---+---+
|User_ID|Sex|Age|
+-------+---+---+
|1000001|  F| 17|
|1000001|  F| 17|
|1000001|  F| 17|
+-------+---+---+

printSchema()

  • printSchema() operation prints the schema of the DataFrame: the name, data type, and nullability of each column
test_df.printSchema()
root
 |-- User_ID: integer (nullable = true)
 |-- Product_ID: string (nullable = true)
 |-- Gender: string (nullable = true)
 |-- Age: string (nullable = true)
 |-- Occupation: integer (nullable = true)
 |-- Purchase: integer (nullable = true)

columns attribute

  • columns attribute returns the column names of a DataFrame as a Python list
test_df.columns
['User_ID', 'Gender', 'Age']

describe() actions

  • describe() operation computes summary statistics (count, mean, stddev, min, max) of numerical and string columns in the DataFrame
test_df.describe().show()
+-------+------------------+------+------------------+
|summary|           User_ID|Gender|               Age|
+-------+------------------+------+------------------+
|  count|            550068|550068|            550068|
|   mean|1003028.8424013031|  null|30.382052764385495|
| stddev|1727.5915855307312|  null|11.866105189533554|
|    min|           1000001|     F|                 0|
|    max|           1006040|     M|                55|
+-------+------------------+------+------------------+

Let's practice
