Is there a generator version of `string.split()` in Python?
Solution 1:
It is highly probable that re.finditer uses fairly minimal memory overhead.
import re

def split_iter(string):
    return (x.group(0) for x in re.finditer(r"[A-Za-z']+", string))
Demo:
>>> list( split_iter("A programmer's RegEx test.") )
['A', "programmer's", 'RegEx', 'test']
edit: I have just confirmed that this takes constant memory in Python 3.2.1, assuming my testing methodology was correct. I created a very large string (1GB or so), then iterated through the iterable with a for loop (NOT a list comprehension, which would have generated extra memory). This did not result in a noticeable growth of memory (that is, if there was any growth, it was far, far less than the 1GB string).
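For illustration only, a rough sketch of that kind of check (the string size and contents here are my own stand-ins for the 1GB test string); watch the process's memory in e.g. top while it runs:
# roughly 1 GB of text; iterate with a plain for loop, not list(), so nothing accumulates
big_string = "word " * (200 * 1024 * 1024)
for token in split_iter(big_string):
    pass  # memory usage should stay roughly flat during this loop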
More general version:
In reply to a comment "I fail to see the connection with str.split", here is a more general version:
def splitStr(string, sep=r"\s+"):
    # warning: does not yet work if sep is a lookahead like `(?=b)`
    if sep == '':
        return (c for c in string)
    else:
        return (_.group(1) for _ in re.finditer(f'(?:^|{sep})((?:(?!{sep}).)*)', string))
        # alternatively, more verbosely (this would replace the return above;
        # mixing yield with the returns would turn the whole function into a generator):
        #     regex = f'(?:^|{sep})((?:(?!{sep}).)*)'
        #     for match in re.finditer(regex, string):
        #         fragment = match.group(1)
        #         yield fragment
The idea is that ((?!pat).)* 'negates' a group by ensuring it greedily matches until the pattern would start to match (lookaheads do not consume the string in the regex finite-state machine). In pseudocode: repeatedly consume (begin-of-string xor {sep}) + as much as possible until we would be able to begin again (or hit end of string).
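For instance, the negation trick on its own (an illustrative snippet of my own, not part of the original answer):
>>> re.match(r"((?!--).)*", "abc--def").group(0)
'abc'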
Demo:
>>> splitStr('.......A...b...c....', sep='...')
<generator object splitStr.<locals>.<genexpr> at 0x7fe8530fb5e8>
>>> list(splitStr('A,b,c.', sep=','))
['A', 'b', 'c.']
>>> list(splitStr(',,A,b,c.,', sep=','))
['', '', 'A', 'b', 'c.', '']
>>> list(splitStr('.......A...b...c....', r'\.\.\.'))
['', '', '.A', 'b', 'c', '.']
>>> list(splitStr(' A b c. '))
['', 'A', 'b', 'c.', '']
(One should note that str.split has an ugly behavior: it special-cases sep=None by first doing a str.strip to remove leading and trailing whitespace. The above purposefully does not do that; see the last example, where sep=r"\s+".)
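For contrast, the built-in strips first on the same input:
>>> ' A b c. '.split()
['A', 'b', 'c.']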
(I ran into various bugs (including an internal re.error) when trying to implement this... Negative lookbehind restricts you to fixed-length delimiters, so we don't use that. Almost anything besides the above regex seemed to result in errors around the beginning-of-string and end-of-string edge cases (e.g. r'(.*?)($|,)' on ',,,a,,b,c' returns ['', '', '', 'a', '', 'b', 'c', ''] with an extraneous empty string at the end). One can look at the edit history for another seemingly-correct regex that actually has subtle bugs.)
(If you want to implement this yourself for higher performance (although regexes are heavyweight, they most importantly run in C), you'd write some code (with ctypes? not sure how to get generators working with it?), with the following pseudocode for fixed-length delimiters: hash your delimiter of length L. Keep a running hash of length L as you scan the string, using a rolling-hash algorithm with O(1) update time. Whenever the hash might equal your delimiter, manually check if the past few characters were the delimiter; if so, yield the substring since the last yield. Special-case the beginning and end of the string. This would be a generator version of the textbook algorithm for O(N) text search. Multiprocessing versions are also possible. They might seem overkill, but the question implies that one is working with really huge strings... At that point you might consider crazy things like caching byte offsets if there are few of them, working from disk with some disk-backed bytestring view object, buying more RAM, etc. etc.)
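A hedged sketch of that pseudocode for fixed-length delimiters (the function name and hash parameters are my own choices; being pure Python, it illustrates the O(N) structure rather than real speed):
def rolling_hash_split(s, sep):
    # generator split on a fixed-length delimiter via a rolling (Rabin-Karp style) hash
    L = len(sep)
    base, mod = 257, (1 << 61) - 1
    outweight = pow(base, L, mod)           # weight of the character leaving the window
    target = 0
    for ch in sep:                          # hash of the delimiter itself
        target = (target * base + ord(ch)) % mod
    h = 0
    last = 0                                # start of the not-yet-yielded fragment
    for i, ch in enumerate(s):
        h = (h * base + ord(ch)) % mod      # O(1) update of the running hash
        if i >= L:
            h = (h - ord(s[i - L]) * outweight) % mod
        if i >= L - 1 and h == target:
            start = i - L + 1
            # hashes can collide, and a match must not overlap the previous one
            if start >= last and s[start:i + 1] == sep:
                yield s[last:start]
                last = i + 1
    yield s[last:]
For example, list(rolling_hash_split("a,,b,c", ",")) gives ['a', '', 'b', 'c'], matching "a,,b,c".split(",").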
Solution 2:
The most efficient way I can think of is to write one using the start parameter of the str.find() method. This avoids lots of memory use and the overhead of a regexp when it's not needed.
[edit 2016-8-2: updated this to optionally support regex separators]
import re

def isplit(source, sep=None, regex=False):
    """
    generator version of str.split()

    :param source:
        source string (unicode or bytes)
    :param sep:
        separator to split on.
    :param regex:
        if True, will treat sep as regular expression.
    :returns:
        generator yielding elements of string.
    """
    if sep is None:
        # mimic default python str.split() behavior
        source = source.strip()
        sep = "\\s+"
        if isinstance(source, bytes):
            sep = sep.encode("ascii")
        regex = True
    if regex:
        # version using re.finditer()
        if not hasattr(sep, "finditer"):
            sep = re.compile(sep)
        start = 0
        for m in sep.finditer(source):
            idx = m.start()
            assert idx >= start
            yield source[start:idx]
            start = m.end()
        yield source[start:]
    else:
        # version using str.find(), less overhead than re.finditer()
        sepsize = len(sep)
        start = 0
        while True:
            idx = source.find(sep, start)
            if idx == -1:
                yield source[start:]
                return
            yield source[start:idx]
            start = idx + sepsize
This can be used like you want...
>>> list(isplit("abcb", "b"))
['a', 'c', '']
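With the regex mode from the 2016 edit, a pattern separator works the same way (my own example):
>>> list(isplit("a1b22c333d", "[0-9]+", regex=True))
['a', 'b', 'c', 'd']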
While there is a small cost to seeking within the string each time find() or slicing is performed, this should be minimal since strings are represented as contiguous arrays in memory.
Solution 3:
Did some performance testing on the various methods proposed (I won't repeat them here). Some results:
- str.split (default) = 0.3461570239996945
- manual search (by character) (one of Dave Webb's answers) = 0.8260340550004912
- re.finditer (ninjagecko's answer) = 0.698872097000276
- str.find (one of Eli Collins's answers) = 0.7230395330007013
- itertools.takewhile (Ignacio Vazquez-Abrams's answer) = 2.023023967998597
- str.split(..., maxsplit=1) recursion = N/A†
†The recursion answers (string.split with maxsplit=1) fail to complete in a reasonable time. Given string.split's speed they may work better on shorter strings, but then I can't see the use-case for short strings where memory isn't an issue anyway.
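(For reference, a minimal reconstruction of the kind of maxsplit=1 recursion being benchmarked; this is my own sketch, not the referenced answer's exact code. Each step copies the entire remaining tail, which is why it degrades badly on long strings, and deep recursion is its own problem.)
def isplit_maxsplit(s, sep=" "):
    parts = s.split(sep, 1)   # maxsplit=1: copies the whole remaining tail each call
    yield parts[0]
    if len(parts) == 2:
        yield from isplit_maxsplit(parts[1], sep)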
Tested using timeit on:
the_text = "100 " * 9999 + "100"

def test_function(method):
    def fn():
        total = 0
        for x in method(the_text):
            total += int(x)
        return total
    return fn
This raises another question as to why string.split is so much faster despite its memory usage.
Solution 4:
This is a generator version of split() implemented via re.search() that does not have the problem of allocating too many substrings.
import re
def itersplit(s, sep=None):
    exp = re.compile(r'\s+' if sep is None else re.escape(sep))
    pos = 0
    while True:
        m = exp.search(s, pos)
        if not m:
            if pos < len(s) or sep is not None:
                yield s[pos:]
            break
        if pos < m.start() or sep is not None:
            yield s[pos:m.start()]
        pos = m.end()
sample1 = "Good evening, world!"
sample2 = " Good evening, world! "
sample3 = "brackets][all][][over][here"
sample4 = "][brackets][all][][over][here]["
assert list(itersplit(sample1)) == sample1.split()
assert list(itersplit(sample2)) == sample2.split()
assert list(itersplit(sample3, '][')) == sample3.split('][')
assert list(itersplit(sample4, '][')) == sample4.split('][')
EDIT: Corrected handling of surrounding whitespace if no separator chars are given.