Big Data Fundamentals with PySpark
Upendra Devisetty
Science Analyst, CyVerse
Basic RDD Transformations
map(), filter(), flatMap(), and union()
RDD = sc.parallelize([1,2,3,4])
RDD_map = RDD.map(lambda x: x * x)
RDD = sc.parallelize([1,2,3,4])
RDD_filter = RDD.filter(lambda x: x > 2)
RDD = sc.parallelize(["hello world", "how are you"])
RDD_flatmap = RDD.flatMap(lambda x: x.split(" "))
inputRDD = sc.textFile("logs.txt")
errorRDD = inputRDD.filter(lambda x: "error" in x.split())
warningsRDD = inputRDD.filter(lambda x: "warnings" in x.split())
combinedRDD = errorRDD.union(warningsRDD)
Basic RDD Actions
Actions are operations that return a value after running a computation on the RDD
collect()
take(N)
first()
count()
collect() returns all the elements of the dataset as an array
take(N) returns an array with the first N elements of the dataset
RDD_map.collect()
[1, 4, 9, 16]
RDD_map.take(2)
[1, 4]
RDD_map.first()
1
RDD_flatmap.count()
5