How to use numpy.genfromtxt when first column is string and the remaining columns are numbers?
Basically, I have a bunch of data where the first column is a string (label) and the remaining columns are numeric values. I run the following:
data = numpy.genfromtxt('data.txt', delimiter=',')
This reads most of the data well, but the label column just gets 'nan'. How can I deal with this?
Solution 1:
By default, np.genfromtxt uses dtype=float: that's why your string columns are converted to NaNs because, after all, they're Not A Number...
You can ask np.genfromtxt to try to guess the actual type of your columns by using dtype=None:
>>> import numpy as np
>>> from io import StringIO
>>> test = "a,1,2\nb,3,4"
>>> a = np.genfromtxt(StringIO(test), delimiter=",", dtype=None, encoding="utf-8")
>>> a
array([('a', 1, 2), ('b', 3, 4)],
      dtype=[('f0', '<U1'), ('f1', '<i8'), ('f2', '<i8')])
You can access the columns by using their name, like a['f0']...
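For instance, continuing the session above, field access would look roughly like this (output shown for the unicode dtype produced above):
>>> a['f0']
array(['a', 'b'], dtype='<U1')
>>> a['f1'] + a['f2']   # numeric fields support normal array arithmetic
array([3, 7])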
Using dtype=None is a good trick if you don't know what your columns should be. If you already know what type they should have, you can give an explicit dtype. For example, in our test, we know that the first column is a string, the second an int, and we want the third to be a float. We would then use
>>> np.genfromtxt(StringIO(test), delimiter=",", dtype=("U10", int, float), encoding="utf-8")
array([('a', 1, 2.), ('b', 3, 4.)],
      dtype=[('f0', '<U10'), ('f1', '<i8'), ('f2', '<f8')])
Using an explicit dtype is much more efficient than using dtype=None and is the recommended way.
In both cases (dtype=None or an explicit, non-homogeneous dtype), you end up with a structured array.
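If the automatic f0/f1/f2 names bother you, a structured dtype also lets you pick your own field names (a sketch; the names label, x, and y here are made up for illustration):
>>> dt = np.dtype([('label', 'U10'), ('x', int), ('y', float)])
>>> b = np.genfromtxt(StringIO(test), delimiter=',', dtype=dt, encoding='utf-8')
>>> b['label']
array(['a', 'b'], dtype='<U10')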
[Note: With dtype=None, the input is parsed a second time and the type of each column is updated to match the largest possible type: first we try a bool, then an int, then a float, then a complex, and we keep a string if all else fails. The implementation is rather clunky, actually. There have been some attempts to make the type guessing more efficient (using regexps), but nothing has stuck so far.]
Solution 2:
If your data file is structured like this
col1, col2, col3
1, 2, 3
10, 20, 30
100, 200, 300
then numpy.genfromtxt can interpret the first line as column headers using the names=True option. With this you can access the data very conveniently by providing the column header:
data = np.genfromtxt('data.txt', delimiter=',', names=True)
print(data['col1'])   # array([ 1., 10., 100.])
print(data['col2'])   # array([ 2., 20., 200.])
print(data['col3'])   # array([ 3., 30., 300.])
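Incidentally, names=True combines with dtype=None, so a file that has both a header row and a string column can be read in one call (a sketch, assuming a hypothetical mixed.txt whose first header field is label):
data = np.genfromtxt('mixed.txt', delimiter=',', names=True, dtype=None, encoding='utf-8')
print(data['label'])  # the string column, accessed by its header name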
Since in your case the data is arranged like this
row1, 1, 10, 100
row2, 2, 20, 200
row3, 3, 30, 300
you can achieve something similar using the following code snippet:
labels = np.genfromtxt('data.txt', delimiter=',', usecols=0, dtype=str)
raw_data = np.genfromtxt('data.txt', delimiter=',')[:,1:]
data = {label: row for label, row in zip(labels, raw_data)}
The first line reads the first column (the labels) into an array of strings.
The second line reads all data from the file but discards the first column.
The third line uses a dictionary comprehension to create a dictionary that can be used much like the structured array which numpy.genfromtxt creates with the names=True option:
print(data['row1'])   # array([ 1., 10., 100.])
print(data['row2'])   # array([ 2., 20., 200.])
print(data['row3'])   # array([ 3., 30., 300.])
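As a design note: the snippet above reads the file twice. A single-pass variation (a sketch, not part of the original answer) lets genfromtxt infer all column types at once and then splits the structured result:
raw = np.genfromtxt('data.txt', delimiter=',', dtype=None, encoding='utf-8')
labels = raw['f0']                                    # the string column
values = np.stack([raw[n] for n in raw.dtype.names[1:]], axis=1)
data = {label: row for label, row in zip(labels, values)}
Note that the numeric dtype of values follows whatever genfromtxt inferred (here ints), whereas the two-pass version above always gives floats.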