Pyspark replace strings in Spark dataframe column
I'd like to perform some basic stemming on a Spark Dataframe column by replacing substrings. What's the quickest way to do this?
In my current use case, I have a list of addresses that I want to normalize. For example, this DataFrame:
id  address
1   2 foo lane
2   10 bar lane
3   24 pants ln

would become:

id  address
1   2 foo ln
2   10 bar ln
3   24 pants ln
Solution 1:
For Spark 1.5 or later, you can use the functions package:

from pyspark.sql.functions import regexp_replace

newDf = df.withColumn('address', regexp_replace('address', 'lane', 'ln'))
Quick explanation:
- withColumn adds a column to the DataFrame (or replaces it, if a column with that name already exists).
- regexp_replace generates the new column by replacing every substring that matches the regular expression pattern.
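regexp_replace applies Java regular expressions on the executors, but for a simple pattern like this the semantics match Python's re.sub, so the per-row behavior can be sketched in plain Python. This is an illustration of the replacement logic, not Spark code; note that the bare pattern 'lane' would also match inside longer words, which a word-boundary pattern avoids:

```python
import re

addresses = ["2 foo lane", "10 bar lane", "24 pants ln"]

# \b anchors the match at word boundaries, so "lane" is replaced
# only when it appears as a whole word (e.g. "laneway" is untouched).
normalized = [re.sub(r"\blane\b", "ln", a) for a in addresses]
print(normalized)  # ['2 foo ln', '10 bar ln', '24 pants ln']
```

Java regex supports \b as well, so the same word-boundary pattern can be passed to regexp_replace if partial-word matches are a concern.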
Solution 2:
For Scala:

import org.apache.spark.sql.functions.{col, regexp_replace}

// "\\*" escapes the asterisk, so this strips literal '*' characters
data.withColumn("addr_new", regexp_replace(col("addr_line"), "\\*", ""))
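The escaping matters here: * is a regex quantifier, so an unescaped pattern would be invalid, while "\\*" matches a literal asterisk. The same rule can be demonstrated with Python's re module (the semantics match Java regex for this pattern):

```python
import re

# '*' is a regex quantifier; escape it to match a literal asterisk.
cleaned = re.sub(r"\*", "", "24 pants ln **")
print(repr(cleaned))  # '24 pants ln ' (both asterisks removed)
```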