Length of longest word in a list
What is the more pythonic way of getting the length of the longest word:
len(max(words, key=len))
Or:
max(len(w) for w in words)
Or... something else? (words is a list of strings.)
I find I need to do this often, and after timing with a few different sample sizes, the first way seems to be consistently faster, despite looking less efficient at face value (the redundancy of len being called twice seems not to matter - does more happen in C code in this form?).
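For concreteness, both forms agree on the result; here is a tiny hypothetical list (not my real data) as a sanity check:

words = ["now", "is", "the", "winter"]   # hypothetical sample

print(len(max(words, key=len)))    # 6 (the length of "winter")
print(max(len(w) for w in words))  # 6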
Solution 1:
Although:

max(len(w) for w in words)

does kind of "read" easier, you've got the overhead of a generator.

While:

len(max(words, key=len))

keeps the loop and the key calls in built-in C code (both max and len are builtins), and since len is normally a very cheap operation for strings, it is going to be faster...
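If you want to verify the speed claim yourself, here is a minimal sketch using the timeit module (the sample list is my own choice; absolute numbers will vary by machine):

import timeit

setup = "words = ['now', 'is', 'the', 'winter', 'of', 'our', 'partyhat'] * 100"

# Each statement runs 10000 times; the lower total is the faster form.
print(timeit.timeit("max(len(w) for w in words)", setup=setup, number=10000))
print(timeit.timeit("len(max(words, key=len))", setup=setup, number=10000))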
Solution 2:
I think both are OK, but unless speed is a big consideration,

max(len(w) for w in words)

is the most readable.

When I was looking at them, it took me longer to figure out what

len(max(words, key=len))

was doing, and I was still wrong until I thought about it more. Code should be immediately obvious unless there's a good reason for it not to be.
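To see why it takes a moment to parse, step through what each call returns (hypothetical list again):

words = ["now", "is", "the", "winter"]   # hypothetical sample

longest = max(words, key=len)   # the inner call returns the word itself: 'winter'
print(len(longest))             # the outer len then gives its length: 6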
It's clear from the other posts (and my own tests) that the less readable one is faster. But it's not like either of them is dog slow. And unless the code is on a critical path, it's not worth worrying about.
Ultimately, I think more readable is more Pythonic.
As an aside, this is one of the few cases in which Python 2 is notably faster than Python 3 for the same task.
Solution 3:
If you rewrite the generator expression as a map call (or, for 2.x, imap):
max(map(len, words))
… it's actually a bit faster than the key version, not slower.
python.org 64-bit 3.3.0:
In [186]: words = ['now', 'is', 'the', 'winter', 'of', 'our', 'partyhat'] * 100
In [188]: %timeit max(len(w) for w in words)
10000 loops, best of 3: 90.1 us per loop
In [189]: %timeit len(max(words, key=len))
10000 loops, best of 3: 57.3 us per loop
In [190]: %timeit max(map(len, words))
10000 loops, best of 3: 53.4 us per loop
Apple 64-bit 2.7.2:
In [298]: words = ['now', 'is', 'the', 'winter', 'of', 'our', 'partyhat'] * 100
In [299]: %timeit max(len(w) for w in words)
10000 loops, best of 3: 99 us per loop
In [300]: %timeit len(max(words, key=len))
10000 loops, best of 3: 64.1 us per loop
In [301]: %timeit max(map(len, words))
10000 loops, best of 3: 67 us per loop
In [303]: %timeit max(itertools.imap(len, words))
10000 loops, best of 3: 63.4 us per loop
I think it's more pythonic than the key version, for the same reason the genexp is.
It's arguable whether it's as pythonic as the genexp version. Some people love map/filter/reduce/etc.; some hate them; my personal feeling is that when you're mapping a function that already exists and has a nice name (that is, something you don't have to lambda or partial up), map is nicer, but YMMV (especially if your name is Guido).
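For example (my own illustration), when the function already exists, map reads cleanly, but as soon as you would need a lambda, the genexp tends to win:

words = ["now", "is", "the", "winter"]

max(map(len, words))                # clean: len already exists and has a name
max(len(w.strip()) for w in words)  # vs. max(map(lambda w: len(w.strip()), words))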
One last point:
the redundancy of len being called twice seems not to matter - does more happen in C code in this form?
Think about it like this: you're already calling len N times. Calling it N+1 times instead is hardly likely to make a difference compared to anything you have to do N times, unless you have a tiny number of huge strings.
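To make the N vs. N+1 point concrete, here's a throwaway sketch (my own illustration, not production code) that counts calls via a wrapper:

calls = 0

def counting_len(s):
    # Wrapper around the builtin len that counts invocations.
    global calls
    calls += 1
    return len(s)

words = ["now", "is", "the", "winter"]  # N = 4 words

counting_len(max(words, key=counting_len))
print(calls)  # 5: N calls via key=, plus the single outer call

calls = 0
max(counting_len(w) for w in words)
print(calls)  # 4: exactly N calls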
Solution 4:
I'd say

len(max(x, key=len))

looks quite good because you utilize a keyword argument (key) of a built-in (max) with a built-in (len). So basically max(x, key=len) gets you almost the answer. But none of your code variants look particularly un-pythonic to me.
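And indeed, the intermediate result is meaningful on its own (hypothetical list):

words = ["now", "is", "the", "winter"]

print(max(words, key=len))       # 'winter' - the longest word itself
print(len(max(words, key=len)))  # 6        - its length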