Import CSV file as a pandas DataFrame

What's the Python way to read a CSV file into a pandas DataFrame (which I can then use for statistical operations, which can have differently-typed columns, etc.)?

My CSV file "value.txt" has the following content:

Date,"price","factor_1","factor_2"
2012-06-11,1600.20,1.255,1.548
2012-06-12,1610.02,1.258,1.554
2012-06-13,1618.07,1.249,1.552
2012-06-14,1624.40,1.253,1.556
2012-06-15,1626.15,1.258,1.552
2012-06-16,1626.15,1.263,1.558
2012-06-17,1626.15,1.264,1.572

In R we would read this file in using:

price <- read.csv("value.txt")  

and that would return an R data.frame:

> price <- read.csv("value.txt")
> price
         Date   price factor_1 factor_2
1  2012-06-11 1600.20    1.255    1.548
2  2012-06-12 1610.02    1.258    1.554
3  2012-06-13 1618.07    1.249    1.552
4  2012-06-14 1624.40    1.253    1.556
5  2012-06-15 1626.15    1.258    1.552
6  2012-06-16 1626.15    1.263    1.558
7  2012-06-17 1626.15    1.264    1.572

Is there a Pythonic way to get the same functionality?


Solution 1:

pandas to the rescue:

import pandas as pd
print(pd.read_csv('value.txt'))

        Date    price  factor_1  factor_2
0  2012-06-11  1600.20     1.255     1.548
1  2012-06-12  1610.02     1.258     1.554
2  2012-06-13  1618.07     1.249     1.552
3  2012-06-14  1624.40     1.253     1.556
4  2012-06-15  1626.15     1.258     1.552
5  2012-06-16  1626.15     1.263     1.558
6  2012-06-17  1626.15     1.264     1.572

This returns a pandas DataFrame that is very similar to R's data.frame.
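
If you also want the Date column parsed as real datetimes and used as the index, read_csv can do that in the same call. A small sketch using the same value.txt:

import pandas as pd

# Parse the Date column as datetimes and set it as the row index.
df = pd.read_csv('value.txt', parse_dates=['Date'], index_col='Date')
print(df.dtypes)   # price, factor_1 and factor_2 are inferred as float64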

Solution 2:

To read a CSV file as a pandas DataFrame, you'll need to use pd.read_csv.

But this isn't where the story ends; data exists in many different formats and is stored in different ways, so you will often need to pass additional parameters to read_csv to ensure your data is read in properly.

Here's a table listing common scenarios encountered with CSV files, along with the argument you will need for each. You will usually need some combination of the arguments below to read your data in properly.

| Scenario                                             | Argument               | Example                                             |
|-------------------------------------------------------|------------------------|------------------------------------------------------|
| Read CSV with different separator¹                    | sep/delimiter          | read_csv(..., sep=';')                               |
| Read CSV with tab/whitespace separator                | delim_whitespace       | read_csv(..., delim_whitespace=True)                 |
| Fix UnicodeDecodeError while reading²                 | encoding               | read_csv(..., encoding='latin-1')                    |
| Read CSV without headers³                             | header and names       | read_csv(..., header=None, names=['x', 'y', 'z'])    |
| Specify which column to set as the index⁴             | index_col              | read_csv(..., index_col=[0])                         |
| Read subset of columns                                | usecols                | read_csv(..., usecols=['x', 'y'])                    |
| Numeric data is in European format (e.g., 1.234,56)   | thousands and decimal  | read_csv(..., thousands='.', decimal=',')            |

Footnotes

  1. By default, read_csv uses a C parser engine for performance. The C parser can only handle single-character separators. If your CSV has a multi-character separator, you will need to switch to the 'python' engine, which also accepts regular expressions as the separator:

    df = pd.read_csv(..., sep=r'\s*\|\s*', engine='python')
    
  2. UnicodeDecodeError occurs when the data was stored in one encoding but read in a different, incompatible one. The most common encodings are 'utf-8' and 'latin-1', and your data is likely to fit one of these.
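
     If you don't know the encoding in advance, one rough approach (just a sketch, not the only option) is to try 'utf-8' first and fall back to 'latin-1', which decodes any byte sequence:

    import pandas as pd

    try:
        df = pd.read_csv('value.txt', encoding='utf-8')
    except UnicodeDecodeError:
        # 'latin-1' never raises a decode error, but double-check that
        # text columns still look sensible after reading this way.
        df = pd.read_csv('value.txt', encoding='latin-1')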

  3. header=None specifies that the first row in the CSV is a data row rather than a header row, and names=[...] lets you specify the list of column names to assign to the DataFrame when it is created.

  4. "Unnamed: 0" occurs when a DataFrame with an un-named index is saved to CSV and then re-read after. Instead of having to fix the issue while reading, you can also fix the issue when writing by using

    df.to_csv(..., index=False)
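
     To make the round trip concrete, here is a tiny illustration (the file name and DataFrame below are made up):

    import pandas as pd

    df = pd.DataFrame({'price': [1600.20, 1610.02]})   # any DataFrame with a nameless default index
    df.to_csv('out.csv')                                # index written as a blank-headered first column
    pd.read_csv('out.csv')                              # comes back with an extra "Unnamed: 0" column
    pd.read_csv('out.csv', index_col=[0])               # first column restored as the index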
    

There are other arguments I've not mentioned here, but these are the ones you'll encounter most frequently.
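
Many of these arguments can be combined in a single call. Here is a rough sketch only; the file name and column names below are made up for illustration:

import pandas as pd

# Hypothetical semicolon-separated file with no header row and
# European-style numbers such as 1.234,56.
df = pd.read_csv(
    'euro_prices.csv',
    sep=';',                            # semicolon separator
    header=None,                        # the file has no header row...
    names=['date', 'price', 'factor'],  # ...so supply the column names
    thousands='.',
    decimal=',',
    encoding='latin-1',
    index_col=[0],                      # use the first column (date) as the index
)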

Solution 3:

Here's an alternative to the pandas library, using Python's built-in csv module.

import csv
from pprint import pprint
with open('value.txt', newline='') as f:
    reader = csv.reader(f)
    headers = next(reader)                 # first row holds the column names
    column = {h: [] for h in headers}      # one list of values per column
    for row in reader:
        for h, v in zip(headers, row):
            column[h].append(v)
    pprint(column)    # Pretty printer

will print

{'Date': ['2012-06-11',
          '2012-06-12',
          '2012-06-13',
          '2012-06-14',
          '2012-06-15',
          '2012-06-16',
          '2012-06-17'],
 'factor_1': ['1.255', '1.258', '1.249', '1.253', '1.258', '1.263', '1.264'],
 'factor_2': ['1.548', '1.554', '1.552', '1.556', '1.552', '1.558', '1.572'],
 'price': ['1600.20',
           '1610.02',
           '1618.07',
           '1624.40',
           '1626.15',
           '1626.15',
           '1626.15']}
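
If you end up wanting a DataFrame after all, the dict of lists built above can be handed straight to pandas. A short sketch; the csv module reads everything as strings, so the column types still need converting:

import pandas as pd

df = pd.DataFrame(column)                     # columns are still strings here
df['Date'] = pd.to_datetime(df['Date'])       # parse the dates
for col in ['price', 'factor_1', 'factor_2']:
    df[col] = pd.to_numeric(df[col])          # convert the numeric columns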