Spark load data and add filename as dataframe column

I am loading some data into Spark with a wrapper function:

import os
from pyspark.sql.functions import lit

def load_data(filename):
    df = sqlContext.read.format("com.databricks.spark.csv")\
        .option("delimiter", "\t")\
        .option("header", "false")\
        .option("mode", "DROPMALFORMED")\
        .load(filename)
    # strip the directory and both extensions (.gz, then .txt) to get the hostname
    (hostname, _) = os.path.splitext(os.path.basename(filename))
    (hostname, _) = os.path.splitext(hostname)
    df = df.withColumn('hostname', lit(hostname))
    return df

Specifically, I am using a glob to load a bunch of files at once:

df = load_data( '/scratch/*.txt.gz' )

The files are:

/scratch/host1.txt.gz
/scratch/host2.txt.gz
...

I would like the column 'hostname' to contain the actual name of the file being loaded rather than the glob (i.e. host1, host2, etc., rather than *).

How can I do this?


You can use input_file_name, which:

Creates a string column for the file name of the current Spark task.

from pyspark.sql.functions import input_file_name

df.withColumn("filename", input_file_name())

Same thing in Scala:

import org.apache.spark.sql.functions.input_file_name

df.withColumn("filename", input_file_name)