Python string 'join' is faster (?) than '+', but what's wrong here?
In an earlier post I asked about the most efficient method for mass dynamic string concatenation, and I was advised to use the join method, supposedly the best, simplest and fastest way to do it (as everyone said). But while I was playing with string concatenations, I found some weird(?) results. I'm sure something is going on, but I can't quite get it. Here is what I did:
I defined these functions:
import timeit
def x():
    s=[]
    for i in range(100):
        # Other codes here...
        s.append("abcdefg"[i%7])
    return ''.join(s)
def y():
    s=''
    for i in range(100):
        # Other codes here...
        s+="abcdefg"[i%7]
    return s
def z():
    s=''
    for i in range(100):
        # Other codes here...
        s=s+"abcdefg"[i%7]
    return s
def p():
    s=[]
    for i in range(100):
        # Other codes here...
        s+="abcdefg"[i%7]
    return ''.join(s)
def q():
    s=[]
    for i in range(100):
        # Other codes here...
        s = s + ["abcdefg"[i%7]]
    return ''.join(s)
I have tried to keep everything else (except the concatenation) almost the same across the functions. Then I tested with the following, with the results shown in the comments (using Python 3.1.1 IDLE on a 32-bit Windows machine):
timeit.timeit(x) # 31.54912480500002
timeit.timeit(y) # 23.533029429999942
timeit.timeit(z) # 22.116181330000018
timeit.timeit(p) # 37.718607439999914
timeit.timeit(q) # 108.60377576499991
That means it shows that strng = strng + dyn_strng is the fastest. Though the differences in time are not that significant (except for the last one), I want to know why this is happening. Is it because I am using Python 3.1.1 and it provides '+' as the most efficient? Should I use '+' as an alternative to join? Or have I done something extremely silly? Or what? Please explain clearly.
Some of us Python committers, I believe mostly Rigo and Hettinger, went out of their way (on the way to 2.5, I believe) to optimize some special cases of the alas-far-too-common s += something blight, arguing that it was proven that beginners would never be convinced that ''.join is the right way to go and that the horrible slowness of += might be giving Python a bad name. Others of us weren't that hot about it, because they just couldn't possibly optimize every occurrence (or even just a majority of them) to decent performance; but we didn't feel strongly enough about the issue to try and actively block them.
I believe this thread proves we should have opposed them more sternly. As it is now, they optimized += in a certain hard-to-predict subset of cases to where it can be maybe 20% faster for particular stupid cases than the proper way (which IS still ''.join) -- just a perfect way to trap beginners into pursuing those irrelevant 20% gains by using the wrong idiom... at the cost, once in a while and from their POV out of the blue, of being hit with a performance loss of 200% (or more, since non-linear behavior IS still lurking there just outside of the corners that Hettinger and Rigo prettied up and put flowers in;-) -- one that MATTERS, one that WILL make them miserable. This goes against the grain of Python's "ideally only one obvious way to do it", and it feels to me like we, collectively, have laid a trap for beginners -- the best kind, too... those who don't just accept what they're told by their "betters", but inquisitively go and question and explore.
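To illustrate why that subset is hard to predict, here is a minimal sketch of my own (assuming CPython, whose += fast path can only resize the string in place when nothing else references it; keep one extra name bound to the string and the very same loop degrades to quadratic copying):
import timeit

def plus_no_alias(n=20000):
    s = ""
    for i in range(n):
        s += "x"          # s has no other referrers: CPython may resize it in place
    return s

def plus_with_alias(n=20000):
    s = ""
    for i in range(n):
        alias = s         # a second reference defeats the in-place optimization
        s += "x"          # so every iteration copies the whole string built so far
    return s

print(timeit.timeit(plus_no_alias, number=10))
print(timeit.timeit(plus_with_alias, number=10))
Absolute numbers will vary, but the aliased loop should come out many times slower, and the gap widens as n grows.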
Ah well -- I give up. OP, @mshsayem, go ahead, use += everywhere, enjoy your irrelevant 20% speedups in trivial, tiny, irrelevant cases, and you'd better enjoy them to the hilt -- because one day, when you can't see it coming, on an IMPORTANT, LARGE operation, you'll be hit smack in the midriff by the oncoming trailer truck of a 200% slowdown (unless you get unlucky and it's a 2000% one;-). Just remember: if you ever feel that "Python is horribly slow", REMEMBER, more likely than not it's one of your beloved loops of += turning around and biting the hand that feeds it.
For the rest of us -- those who understand what it means to say We should forget about small efficiencies, say about 97% of the time -- I'll keep heartily recommending ''.join, so we can all sleep in tranquility and KNOW we won't be hit with a superlinear slowdown when we least expect it and least can afford it. But for you, Armin Rigo, and Raymond Hettinger (the last two, dear personal friends of mine, BTW, not just co-committers;-) -- may your += be smooth and your big-O's never worse than N!-)
So, for the rest of us, here's a more meaningful and interesting set of measurements:
$ python -mtimeit -s'r=[str(x)*99 for x in xrange(100,1000)]' 's="".join(r)'
1000 loops, best of 3: 319 usec per loop
With 900 strings of 297 chars each, joining the list directly is of course fastest, but the OP is terrified about having to do appends beforehand. But:
$ python -mtimeit -s'r=[str(x)*99 for x in xrange(100,1000)]' 's=""' 'for x in r: s+=x'
1000 loops, best of 3: 779 usec per loop
$ python -mtimeit -s'r=[str(x)*99 for x in xrange(100,1000)]' 'z=[]' 'for x in r: z.append(x)' '"".join(z)'
1000 loops, best of 3: 538 usec per loop
...with a semi-important amount of data (a very few hundreds of KB -- taking a measurable fraction of a millisecond every which way), even plain good old .append is already superior. In addition, it's obviously and trivially easy to optimize:
$ python -mtimeit -s'r=[str(x)*99 for x in xrange(100,1000)]' 'z=[]; zap=z.append' 'for x in r: zap(x)' '"".join(z)'
1000 loops, best of 3: 438 usec per loop
shaving another tenth of a millisecond off the average looping time. Everybody (at least everybody who's totally obsessed with performance) obviously knows that HOISTING (taking OUT of the inner loop a repetitive computation that would otherwise be performed over and over) is a crucial technique in optimization -- Python doesn't hoist on your behalf, so you have to do your own hoisting in those rare occasions where every microsecond matters.
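For anyone who prefers to reproduce these measurements from a script rather than the shell (e.g. in IDLE, since the OP is on Windows), here is a rough sketch of my own; it is written for Python 3, so range stands in for xrange and the absolute numbers will differ:
import timeit

setup = "r = [str(x)*99 for x in range(100, 1000)]"

# join the pre-built list directly
print(timeit.timeit('s = "".join(r)', setup=setup, number=1000))

# build the string with += in a loop
print(timeit.timeit('s = ""\nfor x in r: s += x', setup=setup, number=1000))

# append to a list, then join once
print(timeit.timeit('z = []\nfor x in r: z.append(x)\ns = "".join(z)', setup=setup, number=1000))

# same, but with the bound method hoisted out of the loop
print(timeit.timeit('z = []; zap = z.append\nfor x in r: zap(x)\ns = "".join(z)', setup=setup, number=1000))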
As to why q is a lot slower: when you say
l += "a"
you are extending the list l in place (appending the characters of "a"), but when you say
l = l + ["a"]
you are creating a new list with the contents of l and ["a"] and then reassigning the result back to l. Thus new lists are constantly being generated.
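Here is a small sketch to make that concrete (my illustration): += keeps the same list object, while l = l + [...] rebinds l to a brand-new list each time:
l = []
before = id(l)

l += "a"                  # extends the existing list in place with the characters of "a"
assert id(l) == before    # still the same list object

l = l + ["b"]             # builds a new list from l's contents plus ["b"], then rebinds l
assert id(l) != before    # l now names a different object

print(l)                  # ['a', 'b']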
I assume x() is slower because you're first building the list and then joining it, so you're measuring not only the time join takes but also the time spent building the list.
In a scenario where you already have a list and you want to create a string out of its elements, join should be faster than iterating through the list and building the string step by step.
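As a rough check of that claim (my own sketch, reusing the OP's payload; the names prebuilt and build_and_join are just for illustration):
import timeit

# join() alone, on a list that already exists
prebuilt = timeit.timeit('"".join(chars)',
                         setup='chars = ["abcdefg"[i % 7] for i in range(100)]')

# building the list and joining it, both inside the timed statement
build_and_join = timeit.timeit('"".join(["abcdefg"[i % 7] for i in range(100)])')

print(prebuilt, build_and_join)   # joining an existing list should be far cheaper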
This question is really about what things cost. We'll play a bit fast and loose here, subtracting results in similar cases. You can decide for yourself if this is a valid method. Here are some basic test cases:
import timeit
def append_to_list_with_join():
    s=[]
    for i in xrange(100):
        s.append("abcdefg"[i%7])
    return ''.join(s)
def append_to_list_with_join_opt():
    s=[]
    x = s.append
    for i in xrange(100):
        x("abcdefg"[i%7])
    return ''.join(s)
def plus_equals_string():
    s=''
    for i in xrange(100):
        s+="abcdefg"[i%7]
    return s
def plus_assign_string():
    s=''
    for i in xrange(100):
        s=s+"abcdefg"[i%7]
    return s
def list_comp_join():
    return ''.join(["abcdefg"[i%7] for i in xrange(100)])
def list_comp():
    return ["abcdefg"[i%7] for i in xrange(100)]
def empty_loop():
    for i in xrange(100):
        pass
def loop_mod():
    for i in xrange(100):
        a = "abcdefg"[i%7]
def fast_list_join():
    return "".join(["0"] * 100)
for f in [append_to_list_with_join, append_to_list_with_join_opt, plus_equals_string, plus_assign_string, list_comp_join, list_comp, empty_loop, loop_mod, fast_list_join]:
    print f.func_name, timeit.timeit(f)
And here is what they cost:
append_to_list_with_join 25.4540209021
append_to_list_with_join_opt 19.9999782794
plus_equals_string 16.7842428996
plus_assign_string 14.8312124167
list_comp_join 16.329590353
list_comp 14.6934344309
empty_loop 2.3819276612
loop_mod 10.1424356308
fast_list_join 2.58149394686
First off, lots of things have unexpected costs in Python. append_to_list_with_join versus append_to_list_with_join_opt shows that even looking up a method on an object has a non-negligible cost. In this case, looking up s.append accounts for roughly a quarter of the total time.
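To isolate that lookup cost on its own (a quick sketch of my own; absolute numbers vary by machine):
import timeit

# re-resolving s.append on every iteration
print(timeit.timeit('s.append("c")', setup='s = []', number=10**6))
# with the bound method hoisted, as append_to_list_with_join_opt does
print(timeit.timeit('ap("c")', setup='s = []; ap = s.append', number=10**6))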
Next, list_comp_join versus list_comp shows that join() is pretty fast: it takes about 1.7 seconds here, or only about 10% of list_comp_join's time.
loop_mod shows that the greatest part of this test is actually in setting up the data, irrespective of which string construction method is used. By inference, the times taken for "string = string + ", "string += ", and the list comprehension are:
plus_equals_string = 16.78 - 10.14 = 6.64
plus_assign_string = 14.83 - 10.14 = 4.69
list_comp = 14.69 - 10.14 = 4.55
So as to the OP's question, join() is fast but the time to create the underlying list, whether with list primitives or list comprehension, is comparable to creating the string with string primitives. If you already have a list, convert it to a string with join() -- it will be fast.
The timings the OP presents indicate that constructing lists using concatenate operators is slow. In contrast, using list comprehensions is fast. If you have to build a list, use a list comprehension.
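Putting that advice together, a minimal sketch:
# build the list with a comprehension, then convert it with a single join()
chars = ["abcdefg"[i % 7] for i in range(100)]
s = "".join(chars)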
Finally, let's take three of the OP's closest functions: what is the difference between x, p, and q? Let's simplify a bit:
import timeit
def x():
    s=[]
    for i in range(100):
        s.append("c")
def p():
    s=[]
    for i in range(100):
        s += "c"
def q():
    s=[]
    for i in range(100):
        s = s + ["c"]
for f in [x,p,q]:
    print f.func_name, timeit.timeit(f)
Here are the results:
x 16.0757342064
p 87.1533697719
q 85.0999698984
And here is the disassembly:
>>> import dis
>>> dis.dis(x)
2 0 BUILD_LIST 0
3 STORE_FAST 0 (s)
3 6 SETUP_LOOP 33 (to 42)
9 LOAD_GLOBAL 0 (range)
12 LOAD_CONST 1 (100)
15 CALL_FUNCTION 1
18 GET_ITER
>> 19 FOR_ITER 19 (to 41)
22 STORE_FAST 1 (i)
4 25 LOAD_FAST 0 (s)
28 LOAD_ATTR 1 (append)
31 LOAD_CONST 2 ('c')
34 CALL_FUNCTION 1
37 POP_TOP
38 JUMP_ABSOLUTE 19
>> 41 POP_BLOCK
>> 42 LOAD_CONST 0 (None)
45 RETURN_VALUE
>>> dis.dis(p)
2 0 BUILD_LIST 0
3 STORE_FAST 0 (s)
3 6 SETUP_LOOP 30 (to 39)
9 LOAD_GLOBAL 0 (range)
12 LOAD_CONST 1 (100)
15 CALL_FUNCTION 1
18 GET_ITER
>> 19 FOR_ITER 16 (to 38)
22 STORE_FAST 1 (i)
4 25 LOAD_FAST 0 (s)
28 LOAD_CONST 2 ('c')
31 INPLACE_ADD
32 STORE_FAST 0 (s)
35 JUMP_ABSOLUTE 19
>> 38 POP_BLOCK
>> 39 LOAD_CONST 0 (None)
42 RETURN_VALUE
>>> dis.dis(q)
2 0 BUILD_LIST 0
3 STORE_FAST 0 (s)
3 6 SETUP_LOOP 33 (to 42)
9 LOAD_GLOBAL 0 (range)
12 LOAD_CONST 1 (100)
15 CALL_FUNCTION 1
18 GET_ITER
>> 19 FOR_ITER 19 (to 41)
22 STORE_FAST 1 (i)
4 25 LOAD_FAST 0 (s)
28 LOAD_CONST 2 ('c')
31 BUILD_LIST 1
34 BINARY_ADD
35 STORE_FAST 0 (s)
38 JUMP_ABSOLUTE 19
>> 41 POP_BLOCK
>> 42 LOAD_CONST 0 (None)
45 RETURN_VALUE
The loops are nearly identical. The comparison amounts to LOAD_ATTR+CALL_FUNCTION+POP_TOP vs. INPLACE_ADD+STORE_FAST vs. BUILD_LIST+BINARY_ADD+STORE_FAST. However, I can't give a more low-level explanation than that -- I just can't find the costs of individual Python bytecodes documented anywhere on the net. But you might get some inspiration from looking at Doug Hellmann's Python Module of the Week posting on dis.