Reading csv files with quoted fields containing embedded commas
I noticed that your problematic line escapes double quotes with double quotes themselves:
"32 XIY ""W"" JK, RE LK"
which should be interpreted simply as
32 XIY "W" JK, RE LK
As described in RFC-4180, page 2 -
- If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote
That's what Excel does, for example, by default.
In Spark (as of Spark 2.1), however, escaping is done by default in a non-RFC way, using a backslash (\). To fix this, you have to explicitly tell Spark to use a double quote as the escape character:
.option("quote", "\"")
.option("escape", "\"")
This may explain why a comma character wasn't interpreted correctly when it was inside a quoted field.
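As a quick sanity check outside Spark, Python's standard csv module follows the same RFC-4180 doublequote convention by default, so it parses the problematic line from the question correctly (a minimal sketch, not Spark code):

```python
import csv
import io

# RFC-4180 style: "" inside a quoted field is an escaped double quote,
# and the comma inside the quotes is part of the field, not a delimiter.
line = '"32 XIY ""W"" JK, RE LK"\n'
row = next(csv.reader(io.StringIO(line)))  # doublequote=True is the default
print(row)  # ['32 XIY "W" JK, RE LK'] - a single field, comma preserved
```

This is the same behavior you get from Spark once quote and escape are both set to the double quote character.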
Options for the Spark csv format are not well documented on the Apache Spark site, but here's slightly older documentation that I still find useful quite often:
https://github.com/databricks/spark-csv
Update Aug 2018: Spark 3.0 might change this behavior to be RFC-compliant. See SPARK-22236 for details.
For anyone doing this in Scala: Tagar's answer nearly worked for me (thank you!); all I had to do was escape the double quote when setting my option param:
.option("quote", "\"")
.option("escape", "\"")
I'm using Spark 2.3, so I can confirm Tagar's solution still works the same under the new release.
For anyone whose parsing is still not working after using Tagar's solution:
Pyspark 3.1.2
.option("quote", "\"")
is the default, so this is not necessary. However, in my case my data had multiple lines, so Spark was unable to auto-detect the \n
within a single data point and at the end of every row. Using .option("multiline", True)
solved my issue, along with .option('escape', "\"")
So generally it's better to use the multiline option by default.
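The embedded-newline situation can also be reproduced with Python's standard csv module, which, like Spark with the multiline option enabled, keeps a quoted \n inside a single field rather than treating it as a record boundary (a minimal sketch with made-up sample data):

```python
import csv
import io

# A quoted field that spans two physical lines; the \n belongs to the field.
data = '"first part\nsecond part",other\n'
row = next(csv.reader(io.StringIO(data)))
print(row)  # ['first part\nsecond part', 'other'] - one record, two fields
```

Without multiline support, a line-oriented reader would split this record at the embedded \n, which is exactly the failure mode described above.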
A delimiter (comma) specified inside quotes will be ignored by default. Spark SQL has a built-in CSV reader since Spark 2.0.
df = session.read \
    .option("header", "true") \
    .csv("csv/file/path")
more about CSV reader here - .