Can you recommend a Java library for reading (and possibly writing) CSV files? [closed]

We have used opencsv (http://opencsv.sourceforge.net/) with good success.
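For anyone who wants a quick starting point, here is a minimal opencsv read/write sketch (not from the original answer - the file names are placeholders, and the package is com.opencsv in recent releases, au.com.bytecode.opencsv in older ones):

import com.opencsv.CSVReader;
import com.opencsv.CSVWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.util.List;

public class OpenCsvExample {
    public static void main(String[] args) throws Exception {
        // read the whole file into memory; each row becomes a String[]
        List<String[]> rows;
        try (CSVReader reader = new CSVReader(new FileReader("input.csv"))) {
            rows = reader.readAll();
        }

        // write the rows back out (values are quoted as needed)
        try (CSVWriter writer = new CSVWriter(new FileWriter("output.csv"))) {
            for (String[] row : rows) {
                writer.writeNext(row);
            }
        }
    }
}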

I also came across another question with good links: Java lib or app to convert CSV to XML file?


Super CSV is a great choice for reading/parsing, validating and mapping CSV files to POJOs!

We (the Super CSV team) have just released a new version (you can download it from SourceForge or Maven).

Reading a CSV file

The following example uses CsvDozerBeanReader (a new reader we've just released that uses Dozer for bean mapping with deep mapping and index-based mapping support) - it's based on the example from our website. If you don't need the Dozer functionality (or you just want a simple standalone dependency), then you can use CsvBeanReader instead (see this code example).
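For reference, a CsvBeanReader version looks roughly like the sketch below (this is not the linked example; Person, its firstName/age properties and people.csv are placeholders for a flat JavaBean and CSV file of your own):

ICsvBeanReader beanReader = null;
try {
    beanReader = new CsvBeanReader(new FileReader("people.csv"),
        CsvPreference.STANDARD_PREFERENCE);

    // use the header row as the name mapping (column names match the bean's property names)
    final String[] header = beanReader.getHeader(true);
    final CellProcessor[] processors = new CellProcessor[] { 
        new NotNull(),               // firstName
        new Optional(new ParseInt()) // age
    };

    Person person;
    while( (person = beanReader.read(Person.class, header, processors)) != null ) {
        System.out.println(person);
    }
} finally {
    if( beanReader != null ) {
        beanReader.close();
    }
}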

Example CSV file

Here is an example CSV file that represents responses to a survey. It has a header and 3 rows of data, all with 8 columns.

age,consentGiven,questionNo1,answer1,questionNo2,answer2,questionNo3,answer3
18,Y,1,Twelve,2,Albert Einstein,3,Big Bang Theory
,Y,1,Thirteen,2,Nikola Tesla,3,Stargate
42,N,1,,2,Carl Sagan,3,Star Wars

Defining the mapping from CSV to POJO

Each row of the CSV file will be read into a SurveyResponse instance, each of which has a List of Answers. For the mapping to work, your classes should be valid JavaBeans (i.e. they have a default no-arg constructor and getters/setters defined for each field).
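The bean classes themselves aren't shown here, so the following is a minimal sketch consistent with the field mapping and the output further down (each class in its own file; age is an Integer because it is optional in the CSV):

import java.util.ArrayList;
import java.util.List;

public class SurveyResponse {

    private Integer age;
    private Boolean consentGiven;
    private List<Answer> answers = new ArrayList<Answer>();

    public SurveyResponse() {
        // no-arg constructor required for bean mapping
    }

    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }

    public Boolean getConsentGiven() { return consentGiven; }
    public void setConsentGiven(Boolean consentGiven) { this.consentGiven = consentGiven; }

    public List<Answer> getAnswers() { return answers; }
    public void setAnswers(List<Answer> answers) { this.answers = answers; }

    @Override
    public String toString() {
        return String.format("SurveyResponse [age=%s, consentGiven=%s, answers=%s]",
            age, consentGiven, answers);
    }
}

public class Answer {

    private Integer questionNo;
    private String answer;

    public Answer() {
    }

    public Integer getQuestionNo() { return questionNo; }
    public void setQuestionNo(Integer questionNo) { this.questionNo = questionNo; }

    public String getAnswer() { return answer; }
    public void setAnswer(String answer) { this.answer = answer; }

    @Override
    public String toString() {
        return String.format("Answer [questionNo=%s, answer=%s]", questionNo, answer);
    }
}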

In Super CSV you define the mapping with a simple String array - each element of the array corresponds to a column in the CSV file.

With CsvDozerBeanMapper you can use:

  • simple field mappings (e.g. firstName)

  • deep mappings (e.g. address.country.code)

  • indexed mapping (e.g. middleNames[1] - zero-based index for arrays or Collections)

  • deep + indexed mapping (e.g. person.middleNames[1])

The following is the field mapping for this example - it uses a combination of these:

private static final String[] FIELD_MAPPING = new String[] { 
        "age",                   // simple field mapping (like for CsvBeanReader)
        "consentGiven",          // as above
        "answers[0].questionNo", // indexed (first element) + deep mapping
        "answers[0].answer", 
        "answers[1].questionNo", // indexed (second element) + deep mapping
        "answers[1].answer", 
        "answers[2].questionNo", 
        "answers[2].answer" };

Conversion and Validation

Super CSV has a useful library of cell processors, which can be used to convert the Strings from the CSV file to other data types (e.g. Date, Integer), or to do constraint validation (e.g. mandatory/optional, regex matching, range checking).

Using cell processors is entirely optional - without them, each column of the CSV file will be a String, so each corresponding field must also be a String.

The following is the cell processor configuration for the example. As with the field mapping, each element in the array represents a CSV column. It demonstrates how cell processors can transform the CSV data to the data type of your field, and how they can be chained together.

final CellProcessor[] processors = new CellProcessor[] { 
    new Optional(new ParseInt()), // age
    new ParseBool(),              // consent
    new ParseInt(),               // questionNo 1
    new Optional(),               // answer 1
    new ParseInt(),               // questionNo 2
    new Optional(),               // answer 2
    new ParseInt(),               // questionNo 3
    new Optional()                // answer 3
};

Reading

Reading with Super CSV is very flexible: you supply your own Reader (so you can read from a file, the classpath, a zip file, etc.), and the delimiter and quote character are configurable via preferences (with a number of pre-defined configurations that cover most usages).
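For example (a small illustrative snippet, not part of the original example), if none of the pre-defined preferences (STANDARD_PREFERENCE, EXCEL_PREFERENCE, EXCEL_NORTH_EUROPE_PREFERENCE, TAB_PREFERENCE) fit, you can build your own - say, for semicolon-delimited files:

// quote character, delimiter, end-of-line symbols
CsvPreference semicolonPreference = 
    new CsvPreference.Builder('"', ';', "\r\n").build();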

The reading code below is pretty self-explanatory.

  1. Create the reader (with your Reader and preferences)

  2. (Optionally) read the header

  3. Configure the bean mapping

  4. Keep calling read() until you get a null (end of file)

  5. Close the reader

Code:

ICsvDozerBeanReader beanReader = null;
try {
    beanReader = new CsvDozerBeanReader(new FileReader(CSV_FILENAME),
        CsvPreference.STANDARD_PREFERENCE);

    beanReader.getHeader(true); // ignore the header
    beanReader.configureBeanMapping(SurveyResponse.class, FIELD_MAPPING);

    SurveyResponse surveyResponse;
    while( (surveyResponse = 
        beanReader.read(SurveyResponse.class, processors)) != null ) {
        System.out.println(
            String.format("lineNo=%s, rowNo=%s, surveyResponse=%s",
                beanReader.getLineNumber(), beanReader.getRowNumber(), 
                surveyResponse));
    }

} finally {
    if( beanReader != null ) {
        beanReader.close();
    }
}

Output:

lineNo=2, rowNo=2, surveyResponse=SurveyResponse [age=18, consentGiven=true, answers=[Answer [questionNo=1, answer=Twelve], Answer [questionNo=2, answer=Albert Einstein], Answer [questionNo=3, answer=Big Bang Theory]]]
lineNo=3, rowNo=3, surveyResponse=SurveyResponse [age=null, consentGiven=true, answers=[Answer [questionNo=1, answer=Thirteen], Answer [questionNo=2, answer=Nikola Tesla], Answer [questionNo=3, answer=Stargate]]]
lineNo=4, rowNo=4, surveyResponse=SurveyResponse [age=42, consentGiven=false, answers=[Answer [questionNo=1, answer=null], Answer [questionNo=2, answer=Carl Sagan], Answer [questionNo=3, answer=Star Wars]]]
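Writing

Since the question also mentions writing: Super CSV has matching writers (CsvBeanWriter, CsvDozerBeanWriter, CsvListWriter, CsvMapWriter). As a rough sketch (not from the original answer; Person, its firstName/age properties, the people collection and target.csv are placeholders), writing beans looks like this:

ICsvBeanWriter beanWriter = null;
try {
    beanWriter = new CsvBeanWriter(new FileWriter("target.csv"),
        CsvPreference.STANDARD_PREFERENCE);

    // the bean's property names double as the header and the name mapping
    final String[] header = new String[] { "firstName", "age" };
    beanWriter.writeHeader(header);

    for( Person person : people ) {
        beanWriter.write(person, header);
    }
} finally {
    if( beanWriter != null ) {
        beanWriter.close();
    }
}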

More Information

You can find a lot more information on the website!


I can recommend Super CSV. It's simple to use, and it did everything I needed.


Hey, I have an open-source project for that: JFileHelpers. I think its main advantage is that it uses Java annotations. Take a look.

If you have this bean:

@FixedLengthRecord()
public class Customer {
    @FieldFixedLength(4)
    public Integer custId;

    @FieldAlign(alignMode=AlignMode.Right)
    @FieldFixedLength(20)
    public String name;

    @FieldFixedLength(3)
    public Integer rating;

    @FieldTrim(trimMode=TrimMode.Right)
    @FieldFixedLength(10)
    @FieldConverter(converter = ConverterKind.Date, format = "dd-MM-yyyy")
    public Date addedDate;

    @FieldFixedLength(3)
    @FieldOptional
    public String stockSimbol;    
}

And you want to parse this file:

....|....1....|....2....|....3....|....4                
1   Antonio Pereira     10012-12-1978ABC
2   Felipe Coury          201-01-2007
3   Anderson Polga       4212-11-2007DEF      

All you have to do is this:

FileHelperEngine<Customer> engine =
    new FileHelperEngine<Customer>(Customer.class);

List<Customer> customers =
    engine.readResource("/samples/customers-fixed.txt");

Also, it supports master-detail, date and format conversion, and a lot more. Let me know what you think!

Best regards!


I find Flatpack really good at handling quirky CSV files (escapes, quotes, bad records, etc.).