RDDs are distributed collections which are materialized only on actions. It is not possible to truncate an RDD to a fixed size and still get an RDD back (hence RDD.take(n) returns an Array[T], just like collect).
If you want to get similarly sized RDDs regardless of the input size, you can truncate the items in each of your partitions - this way you can better control the absolute number of items in the resulting RDD. The size of the resulting RDD will then depend on the Spark parallelism (the number of partitions).
An example from spark-shell:
import org.apache.spark.rdd.RDD
val numberOfPartitions = 1000
val millionRdd: RDD[Int] = sc.parallelize(1 to 1000000, numberOfPartitions)
val millionRddTruncated: RDD[Int] = millionRdd.mapPartitions(_.take(10))
val billionRddTruncated: RDD[Int] = sc.parallelize(1 to 1000000000, numberOfPartitions).mapPartitions(_.take(10))
millionRdd.count // 1000000
millionRddTruncated.count // 10000 = 10 items * 1000 partitions
billionRddTruncated.count // 10000 = 10 items * 1000 partitions
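
If you really do need an RDD with an exact, fixed number of elements (rather than "roughly n per partition"), one possible sketch - only sensible when n is small enough to fit in driver memory - is to take the first n elements to the driver and re-parallelize the resulting Array. Continuing in spark-shell with millionRdd from the example above:

val firstN: Array[Int] = millionRdd.take(10)     // Array[Int] on the driver
val firstNRdd: RDD[Int] = sc.parallelize(firstN) // back to an RDD, exactly 10 elements
firstNRdd.count // 10

This round-trips through the driver, so it trades scalability for an exact size; for large n, the per-partition mapPartitions truncation above is the safer choice.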