Spark: overwrite partitioned folders

I have a workflow on Spark 3.1 whose final step writes a DataFrame to S3, partitioned by year, month, day, and hour. I expect the files in each S3 "folder" (partition) to be overwritten on each run, but new files are always appended instead. Any idea what might be the problem?

import org.apache.spark.sql.SaveMode

// "dynamic" should overwrite only the partitions present in the incoming
// data, leaving all other existing partitions untouched
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

df
  .write
  .mode(SaveMode.Overwrite)
  .partitionBy("year", "month", "day", "hour")
  .json(outputPath)
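
For reference, the Spark docs note that the same setting can also be supplied per write as a DataFrameWriter option, where it takes precedence over the session-level config. A minimal sketch of that variant (same DataFrame and output path as above):

df
  .write
  .mode(SaveMode.Overwrite)
  // per-write option; overrides spark.sql.sources.partitionOverwriteMode
  .option("partitionOverwriteMode", "dynamic")
  .partitionBy("year", "month", "day", "hour")
  .json(outputPath)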


Solution 1:

This seems to be a bug in Spark 3.1: dynamic partition overwrite does not behave as expected there. Downgrading to Spark 3.0.1 fixes the problem.
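
If you build with sbt, pinning the downgrade would look like the line below — a minimal sketch, assuming a standard sbt project and a cluster that actually runs Spark 3.0.1 (the artifact name and scope are the usual convention, not taken from the question):

// build.sbt -- pin spark-sql to 3.0.1; adjust the Scala/Spark versions to your cluster
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.0.1" % "provided"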