How to get the next n values of a generator in a list (Python)

Use itertools.islice:

import itertools

list(itertools.islice(it, n))

TL;DR: Use itertools.islice.
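
For example (the toy gen() below is just for illustration), islice simply stops early when the generator has fewer than n values, rather than raising:

import itertools

def gen():
    yield from [1, 2, 3]   # a toy generator with only 3 values

it = gen()
print(list(itertools.islice(it, 5)))   # [1, 2, 3] -- no error, just a shorter list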

Originally I wrote a different answer, which turned out to be a bad idea:

[next(it) for _ in range(n)]

This crashes when it yields fewer than n values, and the exact behaviour depends on subtle issues, so people reading such code are unlikely to understand its precise semantics.

What happens if it is exhausted and next(it) raises StopIteration?

(i.e. when it had fewer than n values to yield)

When I wrote the above line a couple of years ago, I probably thought StopIteration would have the clever side effect of cleanly terminating the list comprehension. But no: the whole comprehension crashes, propagating the StopIteration upwards. (It would exit cleanly only if the exception originated from the range(n) iterator.)

Which is probably not the behavior you want.
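
A minimal demonstration (the gen() helper is again just an illustrative stand-in for your generator):

def gen():
    yield from [1, 2]      # only 2 values available

it = gen()
[next(it) for _ in range(5)]   # consumes 1 and 2, then raises StopIteration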

But it gets worse. The following is supposed to be equivalent to the list comprehension (especially on Python 3):

list(next(it) for _ in range(n))

It isn't. The inner part is a generator expression, essentially shorthand for a generator function, and list() considers it finished as soon as StopIteration is raised anywhere inside it.
=> This version copes gracefully when there aren't n values and simply returns a shorter list. (Like itertools.islice().)

[Verified on: Python 2.7, 3.4]

But that, too, is going to change! The fact that a generator silently exits when any code inside it raises StopIteration is a known wart, addressed by PEP 479. From Python 3.7 (or 3.5+ with from __future__ import generator_stop) this raises a RuntimeError instead of cleanly finishing the generator, i.e. it becomes similar to the list comprehension's behaviour. (Tested on a recent HEAD build.)
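
A quick check of the two behaviours (the gen() helper is once more just for illustration):

def gen():
    yield from [1, 2]      # only 2 values available

it = gen()
list(next(it) for _ in range(5))
# Python < 3.7:  returns [1, 2] -- StopIteration silently ends the inner generator
# Python >= 3.7: RuntimeError: generator raised StopIteration  (PEP 479)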


To get the first n values of a generator, you can use more_itertools.take.
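
A quick sketch of take: it returns a plain list and, like islice, simply returns fewer items if the iterable runs out early:

import more_itertools

it = iter(range(3))
print(more_itertools.take(5, it))   # [0, 1, 2] -- at most 5 items, no error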

If you plan to iterate over the words in chunks (e.g. 100 at a time), you can use more_itertools.chunked (https://more-itertools.readthedocs.io/en/latest/api.html):

import more_itertools

for words in more_itertools.chunked(reader, n=100):
    ...  # process up to 100 words at a time (the last chunk may be shorter)