How are stages split into tasks in Spark?
Let's assume for the following that only one Spark job is running at every point in time.
What I get so far
Here is what I understand happens in Spark:
- When a SparkContext is created, each worker node starts an executor. Executors are separate processes (JVMs) that connect back to the driver program. Each executor has the jar of the driver program. Quitting the driver shuts down the executors. Each executor can hold some partitions.
- When a job is executed, an execution plan is created according to the lineage graph.
- The execution job is split into stages, where each stage contains as many neighbouring (in the lineage graph) transformations and actions as possible, but no shuffles. Thus stages are separated by shuffles.
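To make that concrete for myself, here is a minimal sketch (spark-shell style; the path and the word-count logic are just made up by me) of a lineage that I would expect to be cut into two stages at the shuffle caused by reduceByKey:
val pairs = sc.textFile("/some/input")  // made-up path
  .flatMap(_.split(" "))                // narrow transformation, same stage
  .map(word => (word, 1))               // narrow transformation, same stage
val counts = pairs.reduceByKey(_ + _)   // wide transformation: shuffle, so a stage boundary
counts.saveAsTextFile("/some/output")   // action in the second stage, made-up path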
I understand that
- A task is a command sent from the driver to an executor by serializing the Function object.
- The executor deserializes (with the driver jar) the command (task) and executes it on a partition.
but
Question(s)
How is a stage split into those tasks?
Specifically:
- Are the tasks determined by the transformations and actions, or can multiple transformations/actions be in a task?
- Are the tasks determined by the partitions (e.g. one task per stage per partition)?
- Are the tasks determined by the nodes (e.g. one task per stage per node)?
What I think (only partial answer, even if right)
In https://0x0fff.com/spark-architecture-shuffle, the shuffle is explained with the image
and I get the impression that the rule is
each stage is split into #number-of-partitions tasks, with no regard for the number of nodes
For my first image I'd say that I'd have 3 map tasks and 3 reduce tasks.
For the image from 0x0fff, I'd say there are 8 map tasks and 3 reduce tasks (assuming that there are only three orange and three dark green files).
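If that rule is right, I should be able to check it myself (just a sketch on my side, with a made-up path and an explicitly chosen reduce-side partition count) by comparing getNumPartitions with the task counts shown in the Spark UI:
val mapSide = sc.textFile("/some/input").map(line => (line, 1))  // made-up path
println(mapSide.getNumPartitions)            // should match the number of map tasks of that stage
val reduced = mapSide.reduceByKey(_ + _, 3)  // force 3 partitions on the reduce side
println(reduced.getNumPartitions)            // should match the number of reduce tasks, i.e. 3
reduced.count()                              // trigger the job so the stages and tasks show up in the UI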
Open questions in any case
Is that correct? But even if it is, my questions above are not all answered, because it is still open whether multiple operations (e.g. multiple maps) are within one task or are separated into one task per operation.
What others say
"What is a task in Spark? How does the Spark worker execute the jar file?" and "How does the Apache Spark scheduler split files into tasks?" are similar, but I did not feel that my question was answered clearly there.
Solution 1:
You have a pretty nice outline here. To answer your questions
- A separate task does need to be launched for each partition of data for each stage. Consider that each partition will likely reside on distinct physical locations - e.g. blocks in HDFS or directories/volumes for a local file system.
Note that the submission of Stages is driven by the DAG Scheduler. This means that stages that are not interdependent may be submitted to the cluster for execution in parallel: this maximizes the parallelization capability on the cluster. So if operations in our dataflow can happen simultaneously, we will expect to see multiple stages launched.
We can see that in action in the following toy example in which we do the following types of operations:
- load two datasources
- perform some map operation on both of the data sources separately
- join them
- perform some map and filter operations on the result
- save the result
So then how many stages will we end up with?
- 1 stage each for loading the two datasources in parallel = 2 stages
- A third stage representing the join that is dependent on the other two stages
- Note: all of the follow-on operations working on the joined data may be performed in the same stage because they must happen sequentially. There is no benefit to launching additional stages because they cannot start work until the prior operation has completed.
Here is that toy program
val sfi = sc.textFile("/data/blah/input").map{ x => val xi = x.toInt; (xi, xi*xi) }  // datasource 1: load + map in its own stage
val sp = sc.parallelize{ (0 until 1000).map{ x => (x, x*x + 1) } }                   // datasource 2: its own stage
val spj = sfi.join(sp)                                                               // shuffle: starts the third stage
val sm = spj.mapPartitions{ iter => iter.map{ case (k, (v1, v2)) => (k, v1 + v2) } } // narrow: stays in the third stage
val sf = sm.filter{ case (k, v) => v % 10 == 0 }                                     // narrow: stays in the third stage
sf.saveAsTextFile("/data/blah/out")                                                  // action: triggers the job
And here is the DAG of the result
Now: how many tasks? The number of tasks should be equal to
sum over all stages of (#partitions in that stage)
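For example, a sketch with assumed partition counts (not values taken from this post): if the textFile stage reads 2 HDFS blocks, the parallelize stage has 4 partitions, and the join stage also ends up with 4 partitions, then:
// Assumed partition counts per stage for the toy job above (illustrative only)
val partitionsPerStage = Seq(2, 4, 4)
val totalTasks = partitionsPerStage.sum // 2 + 4 + 4 = 10 tasks for the whole job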
Solution 2:
This might help you better understand different pieces:
- Stage: a collection of tasks. The same processing logic runs against different subsets of the data (partitions).
- Task: a unit of work on a partition of a distributed dataset. So in each stage, number-of-tasks = number-of-partitions, or as you said, "one task per stage per partition".
- Each executor runs in one YARN container, and each container resides on one node.
- Each stage utilizes multiple executors, and each executor is allocated multiple vcores.
- Each vcore can execute exactly one task at a time.
- So at any stage, multiple tasks can be executed in parallel: number-of-tasks running = number-of-vcores being used (see the sketch below).
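As a rough illustration (the executor and core counts here are my assumptions, not values from this answer), the degree of task parallelism is bounded by executors × vcores:
// Hypothetical resource settings: 4 executors with 3 vcores each (assumed values)
val conf = new org.apache.spark.SparkConf()
  .set("spark.executor.instances", "4")
  .set("spark.executor.cores", "3")
// At most 4 * 3 = 12 tasks run concurrently, so a stage with 40 partitions
// would execute its 40 tasks in waves of up to 12.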