Introduction to Spark SQL in Python
Mark Plutowski
Data Scientist
A Spark Task is a unit of execution that runs on a single CPU core.
A Spark Stage is a group of tasks that perform the same computation in parallel, each task typically running on a different subset of the data.
A Spark Job is a computation triggered by an action, sliced into one or more stages.
spark.catalog.cacheTable('table1')
spark.catalog.uncacheTable('table1')
spark.catalog.isCached('table1')
spark.catalog.dropTempView('table1')
spark.catalog.listTables()
[Table(name='text', database=None, description=None, tableType='TEMPORARY', isTemporary=True)]
Lists the tables and temporary views registered in the Spark catalog
query3agg = """ SELECT w1, w2, w3, COUNT(*) as count FROM ( SELECT word AS w1, LEAD(word,1) OVER(PARTITION BY part ORDER BY id ) AS w2, LEAD(word,2) OVER(PARTITION BY part ORDER BY id ) AS w3 FROM df ) GROUP BY w1, w2, w3 ORDER BY count DESC """
spark.sql(query3agg).show()