Ignoring NaNs with str.contains
I want to find rows that contain a string, like so:
DF[DF.col.str.contains("foo")]
However, this fails because some elements are NaN:
ValueError: cannot index with vector containing NA / NaN values
So I resort to this obfuscated workaround:
DF[DF.col.notnull()][DF.col.dropna().str.contains("foo")]
Is there a better way?
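For reference, a minimal, self-contained sketch that reproduces the problem and the verbose workaround (the column name col and the sample values are made up for illustration):

import numpy as np
import pandas as pd

# Hypothetical sample data with a missing value in the column being searched.
DF = pd.DataFrame({"col": ["foo1", "foo2", "bar", np.nan]})

# DF[DF.col.str.contains("foo")]   # raises: cannot index with vector containing NA / NaN values

# The verbose workaround: drop the NaNs before building the boolean mask.
print(DF[DF.col.notnull()][DF.col.dropna().str.contains("foo")])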
There's a flag for that:
In [11]: df = pd.DataFrame([["foo1"], ["foo2"], ["bar"], [np.nan]], columns=['a'])
In [12]: df.a.str.contains("foo")
Out[12]:
0     True
1     True
2    False
3      NaN
Name: a, dtype: object
In [13]: df.a.str.contains("foo", na=False)
Out[13]:
0     True
1     True
2    False
3    False
Name: a, dtype: bool
See the str.contains docs:
na : default NaN, fill value for missing values.
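The fill value doesn't have to be False. For instance, passing na=True keeps the rows with missing values instead (continuing with the same df; the In/Out numbering here is only illustrative):

In [14]: df.a.str.contains("foo", na=True)
Out[14]:
0     True
1     True
2    False
3     True
Name: a, dtype: bool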
So you can do the following:
In [21]: df.loc[df.a.str.contains("foo", na=False)]
Out[21]:
      a
0  foo1
1  foo2
In addition to the above answers, for a column whose name isn't a single word (so attribute access like df.col won't work), you can use bracket notation:
df[df['Product ID'].str.contains("foo") == True]
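This works because NaN == True evaluates to False, so the comparison silently turns the NaN results into False and those rows drop out of the mask. A quick sketch (the 'Product ID' column and its values are hypothetical):

import numpy as np
import pandas as pd

df = pd.DataFrame({"Product ID": ["foo1", "foo2", "bar", np.nan]})

# NaN == True is False, so rows with missing values are excluded from the mask.
mask = df["Product ID"].str.contains("foo") == True
print(mask.tolist())   # [True, True, False, False]
print(df[mask])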
You can also fill in the missing values in the resulting mask:
df[df.col.str.contains("foo").fillna(False)]
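A quick, self-contained sketch (the data is illustrative) showing that this selects the same rows as na=False:

import numpy as np
import pandas as pd

DF = pd.DataFrame({"col": ["foo1", "foo2", "bar", np.nan]})  # illustrative data

# Filling the mask afterwards is equivalent to passing na=False up front.
mask = DF.col.str.contains("foo").fillna(False)
print(mask.tolist())   # [True, True, False, False]
print(DF[mask])        # rows 0 and 1, same as DF[DF.col.str.contains("foo", na=False)]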