Filtering a Pyspark DataFrame with SQL-like IN clause

Solution 1:

A string you pass to SQLContext is evaluated in the scope of the SQL environment; it doesn't capture the closure. If you want to pass a variable, you'll have to do it explicitly, using string formatting:

df = sc.parallelize([(1, "foo"), (2, "x"), (3, "bar")]).toDF(("k", "v"))
df.registerTempTable("df")  # on Spark 2.0+ use df.createOrReplaceTempView("df")
sqlContext.sql("SELECT * FROM df WHERE v IN {0}".format(("foo", "bar"))).count()
## 2

Obviously this is not something you would use in a "real" SQL environment due to security considerations (string interpolation leaves you open to SQL injection), but it shouldn't matter here.
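As a side note, Spark 3.4+ supports parameterized SQL, which avoids string interpolation entirely. A minimal sketch, assuming a SparkSession named spark and the temp view registered above:

# Assumes Spark 3.4+, where spark.sql() accepts an args mapping that
# binds named parameter markers (:v1, :v2) safely, with no string formatting
spark.sql(
    "SELECT * FROM df WHERE v IN (:v1, :v2)",
    args={"v1": "foo", "v2": "bar"},
).count()
## 2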

In practice, the DataFrame DSL is a much better choice when you want to build dynamic queries:

from pyspark.sql.functions import col

df.where(col("v").isin({"foo", "bar"})).count()
## 2

It is easy to build and compose, and it handles all the details of HiveQL / Spark SQL for you.
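Because isin returns an ordinary Column expression, it composes with other predicates built at runtime. A small sketch using the same df; the keep list and the extra k > 1 condition are made up for illustration:

from pyspark.sql.functions import col

keep = ["foo", "bar"]  # hypothetical list supplied at runtime
df.where(col("v").isin(keep) & (col("k") > 1)).count()
## 1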

Solution 2:

Reiterating what @zero323 mentioned above: we can do the same thing using a list as well (not only a set), like below:

from pyspark.sql.functions import col

df.where(col("v").isin(["foo", "bar"])).count()
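Note that isin also accepts the values as plain positional arguments, so a list or set isn't even required; a quick sketch:

from pyspark.sql.functions import col

df.where(col("v").isin("foo", "bar")).count()  # varargs form, same result
## 2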

Solution 3:

Just a little addition/update:

choice_list = ["foo", "bar", "jack", "joan"]

If you want to filter your dataframe "df" so that you keep only the rows where the column "v" takes one of the values in choice_list, then:

from pyspark.sql.functions import col

df_filtered = df.where(col("v").isin(choice_list))
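Conversely, to drop those rows instead of keeping them, negate the predicate with ~ (a minimal sketch):

# keep only the rows whose "v" is NOT in choice_list
df_excluded = df.where(~col("v").isin(choice_list))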