Spark losing println() on stdout
Solution 1:
This is a conceptual question...
Imagine you have a big cluster composed of many workers, say n of them, and each worker stores a partition of an RDD or DataFrame. Now imagine you start a map task across that data, and inside that map you have a print statement. First of all:
- Where would that data be printed out?
- Which node has priority, and which partition?
- If all nodes are running in parallel, whose output gets printed first? (See the sketch right after this list.)
- How would such a print queue even be created?
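To get a feel for the ordering problem, here is a minimal sketch, assuming a spark-shell started in local mode with several cores (for example local[4]), which is the one case where task output actually reaches your console:
// Four partitions are processed by parallel tasks, so the numbers
// come out interleaved in a different order on every run.
sc.parallelize(1 to 20, 4).foreach(println)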
Those are too many questions, so the designers/maintainers of Apache Spark logically decided not to route output printed inside a distributed operation back to the driver (and the same applies to code running inside tasks that use accumulators or broadcast variables): a println inside a map goes to the stdout of whichever worker executes the task, not to your console.
This also makes sense because Spark is an engine designed for very large datasets. While printing can be useful for testing and debugging, you wouldn't want to print every line of a DataFrame or RDD, because they are built to hold millions or billions of rows! So why deal with these complicated questions when you wouldn't even want to print in the first place?
To see this for yourself, you can run this Scala code, for example:
// Let's create a simple RDD
val rdd = sc.parallelize(1 to 10000)

def printStuff(x: Int): Int = {
  println(x)
  x + 1
}

// Nothing shows up on the driver: map is lazy (no action has been called),
// and even when it runs, the println executes on the workers.
rdd.map(printStuff)

// But you can print (part of) the RDD by bringing it back to the driver:
rdd.take(10).foreach(println)
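Two things are worth spelling out here: map is lazy, so printStuff never even runs until an action is called, and once an action does run, the println executes on whichever worker processes the partition, so on a cluster the output lands in each executor's stdout log (reachable from the Spark UI) rather than on the driver's console. A minimal sketch, reusing the rdd and printStuff defined above (df below is a hypothetical existing DataFrame):
// Forcing execution with an action: printStuff now runs, but its output
// appears in the executors' stdout (the driver console only in local mode).
rdd.map(printStuff).count()

// For a DataFrame, the driver-side equivalent of the take/println pattern:
// df.show(10)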
Solution 2:
I was able to work around it by making a utility function:
// A thin wrapper around println; call it from inside your Spark code.
object PrintUtiltity {
  def print(data: String): Unit = {
    println(data)
  }
}
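For completeness, a usage sketch (rdd here is a hypothetical RDD[Int]; where the printed lines end up still depends on which JVM runs the task: the driver console in local mode, the executors' stdout logs on a cluster):
// Call the utility inside a transformation; an action is still needed
// to make the transformation run at all.
rdd.map { x =>
  PrintUtiltity.print(s"processing element $x")
  x
}.count()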