Is recursion ever faster than looping?
This depends on the language being used. You wrote 'language-agnostic', so I'll give some examples.
In Java, C, and Python, recursion is fairly expensive compared to iteration (in general) because it requires the allocation of a new stack frame. In some C compilers, one can use a compiler flag to eliminate this overhead, which transforms certain types of recursion (actually, certain types of tail calls) into jumps instead of function calls.
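For example, a sketch in C (assuming GCC or Clang): the tail-recursive version below is typically lowered to the same jump-based loop as the iterative one when optimization is enabled (GCC's -foptimize-sibling-calls, turned on at -O2), while without optimization each call allocates a stack frame.

```c
/* Tail-recursive: the recursive call is the last thing the function
   does, so an optimizing compiler can replace it with a jump. */
long sum_rec(long n, long acc) {
    if (n == 0) return acc;
    return sum_rec(n - 1, acc + n);   /* tail call */
}

/* Equivalent iteration; with optimization both usually compile
   to the same decrement-and-branch loop. */
long sum_iter(long n) {
    long acc = 0;
    while (n != 0) {
        acc += n;
        n -= 1;
    }
    return acc;
}
```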
In some functional language implementations, iteration can be very expensive and recursion can be very cheap. Many transform recursion into a simple jump, while mutating a loop variable sometimes requires relatively heavy operations, especially in implementations that support multiple threads of execution. Mutation is expensive in some of these environments because of the interaction between the mutator and the garbage collector when both might be running at the same time.
I know that in some Scheme implementations, recursion will generally be faster than looping.
In short, the answer depends on the code and the implementation. Use whatever style you prefer. If you're using a functional language, recursion might be faster. If you're using an imperative language, iteration is probably faster. In some environments, both methods will result in the same assembly being generated (put that in your pipe and smoke it).
Addendum: In some environments, the best alternative is neither recursion nor iteration but higher-order functions. These include "map", "filter", and "reduce" (also called "fold"). Not only are they often the preferred, cleaner style, but in some environments these functions are the first (or only) constructs to get a boost from automatic parallelization, so they can be significantly faster than either iteration or recursion. Data Parallel Haskell is an example of such an environment.
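As an illustration, here is what a "reduce"/"fold" might look like if written by hand in C with a function pointer (the names reduce and add are made up for this sketch; real environments provide library versions, sometimes auto-parallelized):

```c
#include <stdio.h>

/* hypothetical hand-written "reduce"/"fold": combine all elements
   with a binary function, starting from an initial accumulator */
long reduce(const long *xs, int n, long init, long (*f)(long, long)) {
    long acc = init;
    for (int i = 0; i < n; i++)
        acc = f(acc, xs[i]);
    return acc;
}

long add(long a, long b) { return a + b; }

int main(void) {
    long xs[] = {1, 2, 3, 4, 5};
    printf("%ld\n", reduce(xs, 5, 0, add));   /* prints 15 */
    return 0;
}
```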
List comprehensions are another alternative, but they are usually just syntactic sugar for iteration, recursion, or higher-order functions.
Is recursion ever faster than a loop?
No. Iteration will always be faster than Recursion (on a von Neumann architecture).
Explanation:
If you build the minimum operations of a generic computer from scratch, "Iteration" comes first as a building block and is less resource-intensive than "Recursion"; ergo, it is faster.
Building a pseudo-computing-machine from scratch:
Ask yourself: what do you need to compute a value, i.e. to follow an algorithm and reach a result?
We will establish a hierarchy of concepts, starting from scratch: first we define the basic, core concepts, then build second-level concepts from those, and so on.
Concept 1: Memory cells, storage, state. To do something you need places to store final and intermediate result values. Let's assume we have an infinite array of "integer" cells, called Memory, M[0..Infinite].
Concept 2: Instructions. Do something: transform a cell, change its value, alter state. Every interesting instruction performs a transformation. Basic instructions are:
a) Set & move memory cells
- store a value into memory, e.g.: store 5 m[4]
- copy a value to another position, e.g.: store m[4] m[8]
b) Logic and arithmetic
- and, or, xor, not
- add, sub, mul, div, e.g.: add m[7] m[8]
Concept 3: An Executing Agent. A core in a modern CPU: an "agent" is something that can execute instructions. An agent can also be a person following the algorithm on paper.
Concept 4: Order of steps. A sequence of instructions: do this first, do that after, and so on; an imperative sequence of instructions. Even one-line expressions are "an imperative sequence of instructions": if an expression has a specific "order of evaluation", then it has steps. This means that even a single composed expression has implicit "steps", and also an implicit local variable (let's call it "result"). E.g.:
```
4 + 3 * 2 - 5              // infix notation
(- (+ (* 3 2) 4) 5)        // prefix notation
(sub (add (mul 3 2) 4) 5)  // instruction form
```
The expression above implies 3 steps with an implicit "result" variable.
```
// pseudocode
1. result = (mul 3 2)
2. result = (add 4 result)
3. result = (sub result 5)
```
So even infix expressions, since they have a specific order of evaluation, are an imperative sequence of instructions: the expression implies a sequence of operations performed in a specific order, and because there are steps, there is also an implicit intermediate "result" variable.
Concept 5: Instruction Pointer. If you have a sequence of steps, you also have an implicit "instruction pointer". The instruction pointer marks the next instruction and advances after the instruction is read but before it is executed.
In this pseudo-computing-machine, the Instruction Pointer is part of Memory. (Note: normally the Instruction Pointer is a "special register" in a CPU core, but here we simplify the concepts and assume all data, registers included, is part of "Memory".)
Concept 6: Jump. Once you have an ordered sequence of steps and an Instruction Pointer, you can apply the "store" instruction to alter the value of the Instruction Pointer itself. We give this specific use of the store instruction a new name, Jump, because it is easier to think about it as a new concept. By altering the instruction pointer, we instruct the agent to "go to step x".
Concept 7: Infinite Iteration. By jumping back, you can now make the agent "repeat" a certain number of steps. At this point we have infinite Iteration:
```
1. mov 1000 m[30]
2. sub m[30] 1
3. jmp-to 2        // infinite loop
```
Concept 8: Conditional. With a "conditional" clause, you can conditionally execute one of several instructions based on the current state (which can be set by a previous instruction).
Concept 9: Proper Iteration. With the conditional clause we can escape the infinite loop of the backward jump. We now have a conditional loop, and with it proper Iteration:
```
1. mov 1000 m[30]
2. sub m[30] 1
3. (if not-zero) jmp-to 2   // jump only if the previous sub
                            // instruction did not result in 0
// this loop repeats 1000 times:
// proper ***iteration***, a conditional loop
```
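For comparison, a sketch of the same counting loop in C; a compiler typically emits essentially the decrement-and-branch sequence above:

```c
/* the same conditional loop: decrement, then jump back while not zero */
int counter = 1000;
do {
    counter -= 1;
} while (counter != 0);
```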
Concept 10: Naming. Giving names to specific memory locations, whether they hold data or a step. This is just a convenience: we add no new instructions by gaining the ability to define "names" for memory locations. Naming is not an instruction for the agent; it is only a convenience for us, making code (at this point) easier to read and easier to change:
```
#define counter m[30]   // name a memory location
   mov 1000 counter
loop:                   // name an instruction-pointer location
   sub counter 1
   (if not-zero) jmp-to loop
```
Concept 11: One-level subroutine. Suppose there is a series of steps you need to execute frequently. You can store those steps at a named position in memory and then jump to that position when you need to execute them (call). At the end of the sequence, you need to return to the point of the call to continue execution. With this mechanism, you are creating new instructions (subroutines) by composing core instructions.
Implementation (no new concepts required):
- Store the current Instruction Pointer in a predefined memory position.
- Jump to the subroutine.
- At the end of the subroutine, retrieve the Instruction Pointer from the predefined memory location, effectively jumping back to the instruction following the original call.
Problem with the one-level implementation: you cannot call another subroutine from a subroutine. If you do, you overwrite the return address (which is effectively a global variable), so calls cannot be nested.
For a better implementation of subroutines, you need a STACK.
Concept 12: Stack. You define a memory region to work as a "stack"; you can "push" values onto it and "pop" the last pushed value. To implement a stack you need a Stack Pointer (similar to the Instruction Pointer) that points to the current "head" of the stack. When you "push" a value, the Stack Pointer is decremented and the value is stored; when you "pop", you read the value at the current Stack Pointer and then the Stack Pointer is incremented.
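A minimal sketch of such a stack in C (downward-growing, fixed size assumed):

```c
#define STACK_SIZE 1024

int stack[STACK_SIZE];
int sp = STACK_SIZE;     /* Stack Pointer: points at the current head */

void push(int v) { stack[--sp] = v; }    /* decrement, then store */
int  pop(void)   { return stack[sp++]; } /* read, then increment  */
```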
Concept 13: Subroutines. Now that we have a stack, we can implement proper subroutines that allow nested calls. The implementation is similar, but instead of storing the Instruction Pointer in a predefined memory position, we "push" its value onto the stack. At the end of the subroutine we "pop" it, effectively jumping back to the instruction after the original call. Because there is a stack, a subroutine can call another subroutine, and we can create several levels of abstraction, defining new instructions as subroutines built from core instructions or from other subroutines.
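A minimal runnable sketch of the mechanism in C, emulating the Instruction Pointer as a plain int and the return addresses as values pushed on an explicit stack (the step numbers are inventions of the sketch; a real CPU does this with call/ret instructions):

```c
#include <stdio.h>

int stack[64];
int sp = 64;
void push(int v) { stack[--sp] = v; }
int  pop(void)   { return stack[sp++]; }

int main(void) {
    int ip = 0;                             /* instruction pointer      */
    for (;;) {
        switch (ip) {
        case 0:  push(1);  ip = 10; break;  /* call A, resume at step 1 */
        case 1:  return 0;                  /* halt                     */
        case 10: puts("A: before calling B");
                 push(11); ip = 20; break;  /* A calls B, resume at 11  */
        case 11: puts("A: after B returned");
                 ip = pop(); break;         /* return from A            */
        case 20: puts("B runs");
                 ip = pop(); break;         /* return from B            */
        }
    }
}
```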
Concept 14: Recursion. What happens when a subroutine calls itself? This is called "recursion".
Problem: overwriting the local intermediate results a subroutine may be keeping in memory. Since you are calling/reusing the same steps, if the intermediate results are stored in predefined memory locations (global variables), they will be overwritten by the nested calls.
Solution: to allow recursion, subroutines should store their local intermediate results on the stack; then, on each recursive call (direct or indirect), the intermediate results end up in different memory locations.
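In a language like C this happens automatically, because every activation keeps its locals in its own stack frame (a minimal sketch):

```c
/* each recursive call gets a fresh stack frame, so every activation
   keeps its own private copy of n across the nested call */
unsigned long fact(unsigned long n) {
    if (n <= 1) return 1;        /* base case stops the recursion */
    return n * fact(n - 1);      /* n is preserved in this frame  */
}
```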
...
Having reached Recursion, we stop here.
Conclusion:
In a von Neumann architecture, "Iteration" is clearly a simpler and more basic concept than "Recursion": we get a form of Iteration at Concept 7 of the hierarchy above (infinite iteration), while Recursion does not appear until Concept 14.
Iteration will always be faster in machine code because it implies fewer instructions, and therefore fewer CPU cycles.
Which one is "better"?
You should use "iteration" when you are processing simple, sequential data structures, and everywhere a “simple loop” will do.
You should use "recursion" when you need to process a recursive data structure (I like to call them “Fractal Data Structures”), or when the recursive solution is clearly more “elegant”.
Advice: use the best tool for the job, but understand the inner workings of each tool in order to choose wisely.
Finally, note that you have plenty of opportunities to use recursion. Recursive Data Structures are everywhere; you are looking at one right now: parts of the DOM supporting what you are reading are an RDS, a JSON expression is an RDS, and the hierarchical file system in your computer is an RDS: you have a root directory containing files and directories, every directory containing files and directories, every one of those directories containing files and directories...
Recursion may well be faster where the alternative is to explicitly manage a stack, like in the sorting or binary tree algorithms you mention.
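For example, here is a sketch in C of both versions of a pre-order binary-tree traversal (the iterative one assumes a fixed depth bound); whether the hand-managed stack actually wins is exactly the kind of thing to measure:

```c
#include <stdio.h>

typedef struct node { int value; struct node *left, *right; } node;

/* recursive traversal: the call stack does the bookkeeping */
void visit_rec(const node *t) {
    if (!t) return;
    printf("%d\n", t->value);
    visit_rec(t->left);
    visit_rec(t->right);
}

/* iterative traversal: we manage an explicit stack ourselves */
void visit_iter(const node *t) {
    const node *stack[64];           /* assumed depth bound for the sketch */
    int sp = 0;
    if (t) stack[sp++] = t;
    while (sp > 0) {
        const node *n = stack[--sp];
        printf("%d\n", n->value);
        if (n->right) stack[sp++] = n->right;  /* right pushed first...       */
        if (n->left)  stack[sp++] = n->left;   /* ...so left is visited first */
    }
}
```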
I've had a case where rewriting a recursive algorithm as an iterative one in Java made it slower.
So the right approach is to first write the code in the most natural way, optimize only if profiling shows it is critical, and then measure the supposed improvement.