How Are Lazy Sequences Implemented in Clojure?
Let's do this.
• I know that lazy sequences only evaluate the items in the sequence that are asked for. How do they do this?
Lazy sequences (henceforth LS, because I am an LP, or Lazy Person) are composed of parts. The head, that is, the part (or parts, since 32 elements are actually evaluated at a time as of Clojure 1.1, and I believe 1.2) of the sequence that has already been evaluated, is followed by something called a thunk, which is basically a chunk of deferred computation (think of it as the rest of your function that creates the sequence, unevaluated) waiting to be called. When it is called, the thunk evaluates however much is asked of it, and a new thunk is created, with context as necessary (how much has been evaluated already, so it can resume from where it left off).
So say you (take 10 (whole-numbers)), where whole-numbers produces a lazy sequence of the whole numbers. That means you're forcing evaluation of thunks 10 times (though internally this may be a little different depending on optimizations).
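For concreteness, here is one way whole-numbers might look; this particular definition is just my sketch of such a function, not something from the question:

(defn whole-numbers
  "Lazy sequence of the whole numbers, starting from n (default 0)."
  ([] (whole-numbers 0))
  ([n] (lazy-seq (cons n (whole-numbers (inc n))))))

(take 10 (whole-numbers))
;; => (0 1 2 3 4 5 6 7 8 9); only as many elements as take asks for are realized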
• What makes lazy sequences so efficient that they don't consume much stack?
This becomes clearer once you read the previous answer (I hope): unless you call for something in particular, nothing is evaluated. When you call for something, each element of the sequence can be evaluated individually, then discarded.
If the sequence is not lazy, oftentimes it is holding onto its head, which consumes heap space. If it is lazy, it is computed, then discarded, as it is not required for subsequent computations.
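As a rough illustration of "holding onto its head" (the sequence and its size here are made up for the example):

;; Binding the sequence to a name retains its head, so every element that gets
;; realized stays on the heap until the binding goes away:
(def retained (map inc (range 10000000)))
(last retained)   ; realizes all ten million elements, and none of them can be GC'd

;; Consuming the same sequence without keeping a reference to its head lets
;; each element be discarded as soon as it has been walked past:
(last (map inc (range 10000000)))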
• How come you can wrap recursive calls in a lazy sequence and no longer get a stack overflow for large computations?
See the previous answer and consider: the lazy-seq macro, per its documentation, will invoke the body only the first time seq is called, and will cache the result and return it on all subsequent seq calls.
Check out the filter function for a cool LS that uses recursion:
(defn filter
  "Returns a lazy sequence of the items in coll for which
  (pred item) returns true. pred must be free of side-effects."
  [pred coll]
  (let [step (fn [p c]
               (when-let [s (seq c)]
                 (if (p (first s))
                   (cons (first s) (filter p (rest s)))
                   (recur p (rest s)))))]
    (lazy-seq (step pred coll))))
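Because the result is itself lazy, you can filter an unbounded sequence and only pay for the part you actually force, for instance:

(take 5 (filter even? (range)))
;; => (0 2 4 6 8); only a small prefix of (range) is ever realized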
• What resources do lazy sequences consume to do what they do?
I'm not quite sure what you're asking here. LSs require memory and CPU cycles. They just don't keep banging the stack, and filling it up with results of the computations required to get the sequence elements.
• In what scenarios are lazy sequences inefficient?
When you're using small seqs that are fast to compute and won't be used much, making them lazy is inefficient because it requires a couple of extra characters to type.
In all seriousness, unless you're trying to make something extremely performant, LSs are the way to go.
• In what scenarios are lazy sequences most efficient?
When you're dealing with seqs that are huge and you're only using bits and pieces of them, that is when you get the most benefit from using them.
Really, it's pretty much always better to use LSs over non-LSs, in terms of convenience, ease of understanding (once you get the hang of them) and reasoning about your code, and speed.
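A small example of the "huge seq, only bits and pieces" case (the numbers are arbitrary):

;; first only forces as much of the pipeline as it needs, so only about the
;; first million squares are ever computed:
(first (drop 1000000 (map #(* % %) (range))))
;; => 1000000000000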
I know that lazy sequences only evaluate the items in the sequence that are asked for. How do they do this?
I think the previously posted answers already do a good job explaining this part. I'll only add that the "forcing" of a lazy sequence is an implicit -- paren-free! :-) -- function call; perhaps this way of thinking about it will make some things clearer. Also note that forcing a lazy sequence involves a hidden mutation -- the thunk being forced needs to produce a value, store it in a cache (mutation!) and throw away its executable code, which will not be required again (mutation again!).
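To make that force-once-and-cache behaviour concrete, here is a rough hand-rolled sketch of a lazily computed cell. It is emphatically not Clojure's actual implementation (that lives in clojure.lang.LazySeq), just an illustration of the thunk-plus-cache idea described above:

(defn lazy-cell
  "Wraps a no-arg thunk; the returned function runs it at most once."
  [thunk]
  (let [state (atom {:thunk thunk})]           ; starts out holding only code
    (fn []
      (let [{:keys [thunk value]} @state]
        (if thunk
          (let [v (thunk)]                     ; forcing: run the code once...
            (reset! state {:value v})          ; ...cache the value, drop the thunk
            v)
          value)))))                           ; already forced: return the cache

(def cell (lazy-cell (fn [] (println "computing...") 42)))
(cell)   ; prints "computing..." and returns 42
(cell)   ; returns 42 without recomputing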
I know that lazy sequences only evaluate the items in the sequence that are asked for. How do they do this?
What makes lazy sequences so efficient that they don't consume much stack?
What resources do lazy sequences consume to do what they do?
They don't consume stack, because they consume heap instead. A lazy sequence is a data structure, living on the heap, which contains a small bit of executable code which can be called to produce more of the data structure if/when that is required.
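You can see that heap object directly at the REPL:

(class (lazy-seq (cons 1 nil)))
;; => clojure.lang.LazySeq, an object that holds the thunk and, once forced, the realized seq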
How come you can wrap recursive calls in a lazy sequence and no longer get a stack overflow for large computations?
Firstly, as mentioned by dbyrne, you can very well get an SO when working with lazy sequences if the thunks themselves need to execute code with a very deeply nested call structure.
However, in a certain sense you can use lazy seqs in place of tail recursion, and to the degree that this works for you you can say that they help in avoiding SOs. In fact, rather importantly, functions producing lazy sequences should not be tail recursive; the conservation of stack space with lazy seq producers arises from the aforementioned stack -> heap transfer and any attempts to write them in a tail recursive fashion will only break things.
The key insight is that a lazy sequence is an object which, when first created, doesn't hold any items (as a strict sequence always does); when a function returns a lazy sequence, only this "lazy sequence object" is returned to the caller, before any forcing takes place. Thus the stack frame used up by the call which returned the lazy sequence is popped before any forcing takes place. Let's have a look at an example producer function:
(defn foo-producer []    ; not tail recursive...
  (lazy-seq
   (cons :foo            ; because it returns the value of the cons call...
         (foo-producer)))) ; which wraps a non-tail self-call
This works because lazy-seq returns immediately without evaluating its body, (cons :foo (foo-producer)), so the outer call to foo-producer also returns immediately and the stack frame it used up is immediately popped. The inner call to foo-producer is hidden in the rest part of the sequence, which is a thunk; if/when that thunk is forced, it will briefly use up its own frame on the stack, but then return immediately as described above, and so on.
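So forcing a finite prefix of foo-producer's output never piles the self-calls up on the stack:

(take 3 (foo-producer))
;; => (:foo :foo :foo); each element's thunk gets its own short-lived stack frame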
Chunking (mentioned by dbyrne) changes this picture very slightly, because a larger number of elements gets produced at each step, but the principle remains the same: each step uses up some stack while the corresponding elements of the lazy seq are being produced, and that stack is reclaimed before any more forcing takes place.
In what scenarios are lazy sequences inefficient?
In what scenarios are lazy sequences most efficient?
There's no point to being lazy if you need to hold the entire thing at once anyway. A lazy sequence makes a heap allocation at every step when not chunked or at every chunk -- once every 32 steps -- when chunked; avoiding that can net you a performance gain in some situations.
However, lazy sequences enable a pipelined mode of data processing:
(->> (lazy-seq-producer)               ; possibly (->> (range)
     (a-lazy-seq-transformer-function) ;            (filter even?)
     (another-transformer-function))   ;            (map inc))
Doing this in a strict way would allocate plenty of heap anyway, because you'd have to keep the intermediate results around to pass them to the next processing stage. Moreover, you'd need to keep the whole thing around, which is actually impossible in the case of (range) -- an infinite sequence! -- and when it is possible, it is usually inefficient.
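The concrete pipeline hinted at in the comments above really does work on the infinite (range), as long as you only force a finite prefix of the result:

(take 5 (->> (range)
             (filter even?)
             (map inc)))
;; => (1 3 5 7 9)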
Originally, lazy sequences in Clojure were evaluated item-by-item as they were needed. Chunked sequences were added in Clojure 1.1 to improve performance. Instead of item-by-item evaluation, "chunks" of 32 elements are evaluated at a time. This reduces the overhead that lazy evaluation incurs. It also allows Clojure to take advantage of the underlying data structures. For example, PersistentVector is implemented as a tree of 32-element arrays. This means that to access an element, you must traverse the tree until the appropriate array is found. With chunked sequences, entire arrays are grabbed at a time, so each of the 32 elements can be retrieved before the tree needs to be re-traversed.
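A quick way to observe chunking at the REPL (this assumes a chunk-producing source such as range in Clojure 1.1 or later):

(first (map #(do (println "realizing" %) %) (range 100)))
;; prints "realizing 0" through "realizing 31": the whole first chunk of 32
;; elements is realized even though only one element was asked for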
There has been discussion about providing a way to force item-by-item evaluation in situations where full laziness is required. However, I don't think it has been added to the language yet.
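One commonly circulated workaround (not part of the language itself) is to re-wrap a chunked seq so that it is realized strictly one element at a time:

(defn unchunk [s]
  (lazy-seq
   (when-let [s (seq s)]
     (cons (first s) (unchunk (rest s))))))

(first (map #(do (println "realizing" %) %) (unchunk (range 100))))
;; now prints only "realizing 0"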
How come you can wrap recursive calls in a lazy sequence and no longer get a stack overflow for large computations?
Do you have an example of what you mean? If you have a recursive binding to a lazy-seq, it can definitely cause a stack overflow.
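For example (this is my guess at the kind of situation meant, not something taken from the original question): if every element of the outer sequence is wrapped in many layers of unrealized lazy seqs, forcing it has to unwind all of those layers on the stack at once.

;; Builds up ~10000 nested layers of lazy map without forcing any of them:
(def nested
  (reduce (fn [acc _] (map inc acc))
          (range 10)
          (range 10000)))

;; Forcing the first element must now unwind every layer in one go:
;; (first nested)   ; likely throws StackOverflowError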