How to efficiently parse fixed-width files?

I am trying to find an efficient way of parsing files that hold fixed-width lines. For example, the first 20 characters represent one column, characters 21 to 30 another, and so on.

Assuming that each line holds 100 characters, what would be an efficient way to parse a line into its components?

I could use string slicing per line, but it's a little ugly if the line is long. Are there any other fast methods?
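For reference, the per-line slicing I mean looks something like the sketch below; the column boundaries and filename are made up:

# naive per-line slicing; the 20/10/... boundaries are just an example
with open('data.txt') as f:       # 'data.txt' is a placeholder filename
    for line in f:
        col1 = line[0:20]
        col2 = line[20:30]
        # ... eight more slices like these for a 100-character line ...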


Solution 1:

Using the struct module from the Python standard library would be fairly easy as well as extremely fast, since it's written in C.

Here's how it could be used to do what you want. It also allows columns of characters to be skipped by specifying negative values for the number of characters in the field.

import struct

fieldwidths = (2, -10, 24)  # negative widths represent ignored padding fields
fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's')
                        for fw in fieldwidths)
fieldstruct = struct.Struct(fmtstring)
parse = fieldstruct.unpack_from
print('fmtstring: {!r}, recsize: {} chars'.format(fmtstring, fieldstruct.size))

line = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n'
fields = parse(line)
print('fields: {}'.format(fields))

Output:

fmtstring: '2s 10x 24s', recsize: 36 chars
fields: ('AB', 'MNOPQRSTUVWXYZ0123456789')

The following modifications adapt it to work in Python 2 or 3 and to handle Unicode input (the snippet above is Python 2 only, because in Python 3 struct unpacks from bytes rather than str):

import struct
import sys

fieldwidths = (2, -10, 24)  # negative widths represent ignored padding fields
fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's')
                        for fw in fieldwidths)
fieldstruct = struct.Struct(fmtstring)
if sys.version_info[0] < 3:
    parse = fieldstruct.unpack_from
else:
    # convert unicode input to a byte string and the results back to unicode
    unpack = fieldstruct.unpack_from
    parse = lambda line: tuple(s.decode() for s in unpack(line.encode()))
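
It is used the same way as the Python 2 version above:

line = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n'
print(parse(line))  # -> ('AB', 'MNOPQRSTUVWXYZ0123456789')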

Here's a way to do it with string slices, as you were considering but were concerned might get too ugly. The nice thing about it, besides not being all that ugly, is that it works unchanged in both Python 2 and 3 and handles Unicode strings. Speed-wise it is, of course, slower than the versions based on the struct module, but it could be sped up slightly by removing the ability to have padding fields.

try:
    from itertools import izip_longest  # added in Py 2.6
except ImportError:
    from itertools import zip_longest as izip_longest  # name change in Py 3.x

try:
    from itertools import accumulate  # added in Py 3.2
except ImportError:
    def accumulate(iterable):
        'Return running totals (simplified version).'
        it = iter(iterable)  # accept any iterable, not just iterators
        total = next(it)
        yield total
        for value in it:
            total += value
            yield total

def make_parser(fieldwidths):
    cuts = tuple(accumulate(abs(fw) for fw in fieldwidths))
    pads = tuple(fw < 0 for fw in fieldwidths)  # bool flags for padding fields
    flds = tuple(izip_longest(pads, (0,)+cuts, cuts))[:-1]  # ignore final one
    parse = lambda line: tuple(line[i:j] for pad, i, j in flds if not pad)
    # optional informational function attributes
    parse.size = sum(abs(fw) for fw in fieldwidths)
    parse.fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's')
                                                for fw in fieldwidths)
    return parse

line = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n'
fieldwidths = (2, -10, 24)  # negative widths represent ignored padding fields
parse = make_parser(fieldwidths)
fields = parse(line)
print('format: {!r}, rec size: {} chars'.format(parse.fmtstring, parse.size))
print('fields: {}'.format(fields))

Output:

format: '2s 10x 24s', rec size: 36 chars
fields: ('AB', 'MNOPQRSTUVWXYZ0123456789')
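
To parse a whole file, build the parser once and reuse it line by line. A minimal sketch, where the 20/10/20/50 widths stand in for the question's hypothetical 100-character layout and 'data.txt' is a placeholder filename:

parse = make_parser((20, 10, 20, 50))  # hypothetical 100-character layout
with open('data.txt') as f:            # 'data.txt' is a placeholder
    for line in f:
        fields = parse(line)
        # ... process the fields tuple here ...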

Solution 2:

I'm not really sure if this is efficient, but it should be readable (as opposed to doing the slicing manually). I defined a function slices that takes a string and column lengths and returns the substrings. I made it a generator so that, for really long lines, it doesn't build a temporary list of substrings.

def slices(s, *args):
    position = 0
    for length in args:
        yield s[position:position + length]
        position += length

Example

In [32]: list(slices('abcdefghijklmnopqrstuvwxyz0123456789', 2))
Out[32]: ['ab']

In [33]: list(slices('abcdefghijklmnopqrstuvwxyz0123456789', 2, 10, 50))
Out[33]: ['ab', 'cdefghijkl', 'mnopqrstuvwxyz0123456789']

In [51]: d,c,h = slices('dogcathouse', 3, 3, 5)
In [52]: d,c,h
Out[52]: ('dog', 'cat', 'house')

But I think the advantage of a generator is lost if you need all columns at once. Where it pays off is when you want to process columns one by one, say in a loop.
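
For example, a sketch of that column-at-a-time pattern, with illustrative widths:

# consume one column at a time; no list of substrings is ever built
for column in slices('abcdefghijklmnopqrstuvwxyz0123456789', 2, 10, 24):
    print(len(column))  # stand-in for real per-column processing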

Solution 3:

Two more options that are easier and prettier than the solutions already mentioned:

The first is using pandas:

import pandas as pd

path = 'filename.txt'

# Using Pandas with a column specification
col_specification = [(0, 20), (21, 30), (31, 50), (51, 100)]
data = pd.read_fwf(path, colspecs=col_specification)
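
read_fwf can also take the field widths directly. A sketch assuming contiguous 20/10/20/50 columns, with made-up column names:

# equivalent, letting pandas compute the column boundaries from widths
data = pd.read_fwf(path, widths=[20, 10, 20, 50], header=None,
                   names=['col_a', 'col_b', 'col_c', 'col_d'])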

And the second option using numpy.loadtxt:

import numpy as np

# Using NumPy and letting it split the columns on whitespace (see note below)
data_also = np.loadtxt(path)
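
Note that loadtxt splits on whitespace by default, so it only works when the columns also happen to be whitespace-separated. For truly fixed-width fields, numpy.genfromtxt accepts a sequence of field widths as the delimiter. A sketch with the same hypothetical widths:

# genfromtxt slices each line at fixed widths instead of splitting on whitespace
data_fw = np.genfromtxt(path, delimiter=(20, 10, 20, 50), dtype=None, encoding=None)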

It really depends on how you want to use your data.

Solution 4:

The code below gives a sketch of what you might want to do if you have some serious fixed-column-width file handling to do.

"Serious" = multiple record types in each of multiple file types, records up to 1000 bytes, the layout-definer and "opposing" producer/consumer is a government department with attitude, layout changes result in unused columns, up to a million records in a file, ...

Features: precompiles the struct format, ignores unwanted columns, converts the input strings to the required data types (the sketch omits error handling), and converts records to object instances (or dicts, or named tuples if you prefer).

Code:

import struct, datetime, io, pprint

# functions for converting input fields to usable data
cnv_text = str.rstrip
cnv_int = int
cnv_date_dmy = lambda s: datetime.datetime.strptime(s, "%d%m%Y")  # ddmmyyyy
# etc

# field specs (field name, start pos (1-relative), len, converter func)
fieldspecs = [
    ('surname', 11, 20, cnv_text),
    ('given_names', 31, 20, cnv_text),
    ('birth_date', 51, 8, cnv_date_dmy),
    ('start_date', 71, 8, cnv_date_dmy),
    ]

fieldspecs.sort(key=lambda x: x[1])  # just in case the specs are out of order

# build the format for struct.unpack, inserting pad fields ("x")
# for any gaps between wanted columns
unpack_len = 0
unpack_fmt = ""
for fieldspec in fieldspecs:
    start = fieldspec[1] - 1
    end = start + fieldspec[2]
    if start > unpack_len:
        unpack_fmt += str(start - unpack_len) + "x"
    unpack_fmt += str(end - start) + "s"
    unpack_len = end
field_indices = range(len(fieldspecs))
print(unpack_len, unpack_fmt)
unpacker = struct.Struct(unpack_fmt).unpack_from

class Record(object):
    pass
    # or use named tuples

raw_data = """\
....v....1....v....2....v....3....v....4....v....5....v....6....v....7....v....8
          Featherstonehaugh   Algernon Marmaduke  31121969            01012005XX
"""

f = io.StringIO(raw_data)
headings = next(f)
for line in f:
    # The guts of this loop would of course be hidden away in a function/method
    # and could be made less ugly.
    # struct works on bytes, so encode the line and decode the extracted fields.
    raw_fields = [s.decode() for s in unpacker(line.encode())]
    r = Record()
    for x in field_indices:
        setattr(r, fieldspecs[x][0], fieldspecs[x][3](raw_fields[x]))
    pprint.pprint(r.__dict__)
    print("Customer name:", r.given_names, r.surname)

Output:

78 10x20s20s8s12x8s
{'birth_date': datetime.datetime(1969, 12, 31, 0, 0),
 'given_names': 'Algernon Marmaduke',
 'start_date': datetime.datetime(2005, 1, 1, 0, 0),
 'surname': 'Featherstonehaugh'}
Customer name: Algernon Marmaduke Featherstonehaugh
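
As the "or use named tuples" comment suggests, the Record class can be replaced by a named tuple. A minimal sketch reusing fieldspecs and unpacker from above; parse_record is a hypothetical helper name:

from collections import namedtuple

# one immutable record type whose field names come from the specs
Customer = namedtuple('Customer', [name for name, _, _, _ in fieldspecs])

def parse_record(line):
    raw_fields = (s.decode() for s in unpacker(line.encode()))
    return Customer._make(cnv(raw)
                          for (_, _, _, cnv), raw in zip(fieldspecs, raw_fields))

Each parsed line then supports attribute access such as parse_record(line).surname.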

Solution 5:

A compact approach: keep the cut boundaries in a list and slice in a list comprehension (s is used instead of shadowing the built-in str):

>>> s = '1234567890'
>>> w = [0, 2, 5, 7, 10]
>>> [s[w[i-1]:w[i]] for i in range(1, len(w))]
['12', '345', '67', '890']
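
The same idea reads a little more cleanly with zip, which pairs each cut point with the next one:

>>> [s[i:j] for i, j in zip(w, w[1:])]
['12', '345', '67', '890']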