What's the difference between recursion, memoization & dynamic programming? [duplicate]
Possible Duplicate:
Dynamic programming and memoization: top-down vs bottom-up approaches
I have gone through a lot of articles on this but can't seem to make sense of it. At times recursion and dynamic programming look the same, and at other times memoization and dynamic programming look alike. Can someone explain the difference to me?
P.S. It will also be helpful if you could point me to some code using the three approaches on the same problem. (e.g. the Fibonacci series problem; I think every article I read used recursion but referred to it as dynamic programming)
Consider calculating the Fibonacci sequence:
Pure recursion:
int fib(int x)
{
if (x < 2)
return 1;
return fib(x-1) + fib(x-2);
}
results in exponential number of calls.
Recursion with memoization/DP:
int fib(int x)
{
static vector<int> cache(N, -1); // N: a compile-time upper bound on x, defined elsewhere
int& result = cache[x];
if (result == -1)
{
if (x < 2)
result = 1;
else
result = fib(x-1) + fib(x-2);
}
return result;
}
Now we have a linear number of calls the first time, and constant time thereafter.
The above method is called "lazy": we calculate the earlier terms the first time they are asked for.
The following would also be considered DP, but without recursion:
int fibresult[N];
void setup_fib()
{
fibresult[0] = 1;
fibresult[1] = 1;
for (int i = 2; i < N; i++)
fibresult[i] = fibresult[i-1] + fibresult[i-2];
}
int fib(int x) { return fibresult[x]; }
This way may be described as "eager", "precaching" or "iterative". It's faster overall, but we have to figure out manually the order in which the subproblems need to be calculated. This is easy for Fibonacci, but for more complex DP problems it gets harder, so we fall back to the lazy recursive method when it is fast enough.
Also the following is neither recursion nor DP:
int fib(int x)
{
    int a = 1;
    int b = 1;
    for (int i = 2; i <= x; i++) // note: <=, so fib(2) == 2, matching the versions above
    {
        a = a + b;
        swap(a, b); // keep the two most recent terms in a and b
    }
    return b;
}
It uses constant space and linear time.
Also, for the sake of completeness, there is a closed form for Fibonacci that uses neither recursion nor DP and lets us calculate the Fibonacci term in constant time using a mathematical formula based on the golden ratio:
http://www.dreamincode.net/forums/topic/115550-fibonacci-closed-form/
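A sketch of that closed form (Binet's formula), adapted by me to the same fib(0) = fib(1) = 1 indexing as the code above; note that floating-point rounding limits it to roughly the first 70 terms:

```cpp
#include <cmath>

// Binet's formula: with this answer's convention (fib(0) = fib(1) = 1),
// the x-th term equals round(phi^(x+1) / sqrt(5)), i.e. shifted by one
// from the textbook F(0) = 0, F(1) = 1 indexing.
long long fib_closed(int x)
{
    const double sqrt5 = std::sqrt(5.0);
    const double phi = (1.0 + sqrt5) / 2.0; // the golden ratio
    return static_cast<long long>(std::round(std::pow(phi, x + 1) / sqrt5));
}
```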
Well, recursion + memoization is precisely a specific "flavor" of dynamic programming: dynamic programming in accordance with the top-down approach.
More precisely, there's no requirement to use recursion specifically. Any divide & conquer solution combined with memoization is top-down dynamic programming. (Recursion is the LIFO flavor of divide & conquer, but you can also use FIFO divide & conquer or any other kind.)
So it is more correct to say that
divide & conquer + memoization == top-down dynamic programming
Also, from a very formal point of view, if you implement a divide & conquer solution for a problem that does not generate repetitive partial solutions (meaning that there's no benefit in memoization), then you can claim that this divide & conquer solution is a degenerate example of "dynamic programming".
However, dynamic programming is a more general concept. Dynamic programming can use bottom-up approach, which is different from divide & conquer+memoization.
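To show that this is not specific to Fibonacci, here is a sketch of my own of divide & conquer + memoization (top-down DP) on a different toy problem: counting monotone lattice paths in a grid.

```cpp
#include <map>
#include <utility>

// Top-down DP sketch (my example, not from the answer above): count paths
// from (0,0) to (r,c) moving only right or down. The divide & conquer split
// is paths(r,c) = paths(r-1,c) + paths(r,c-1); the map memoizes the
// subproblems, which would otherwise be recomputed exponentially often.
long long paths(int r, int c, std::map<std::pair<int, int>, long long>& memo)
{
    if (r == 0 || c == 0)
        return 1; // only one straight path along an edge
    auto key = std::make_pair(r, c);
    auto it = memo.find(key);
    if (it != memo.end())
        return it->second; // subproblem already solved
    long long result = paths(r - 1, c, memo) + paths(r, c - 1, memo);
    memo[key] = result;
    return result;
}
```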
I'm sure you can find detailed definitions on the internet. Here is my attempt to simplify things.
Recursion is a function calling itself.
Dynamic programming is a way to solve problems which exhibit a specific structure (optimal substructure), where a problem can be broken down into subproblems which are similar to the original problem. Clearly one can invoke recursion to solve a DP problem, but it is not necessary: one can solve a DP problem without recursion.
Memoization is a way to optimize DP algorithms which rely on recursion. The point is to avoid solving a subproblem that has already been solved. You can view it as a cache of solutions to subproblems.
They are different concepts. They overlap pretty often, but they are different.
Recursion happens whenever a function calls itself, directly or indirectly. This is all.
Example:
a -> call a
a -> call b, b -> call c, c -> call a
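The indirect case can be sketched with the classic mutually recursive pair of functions (my example, not from the answer):

```cpp
// Indirect recursion: is_even calls is_odd, which calls is_even back.
bool is_odd(unsigned n); // forward declaration so the two can call each other

bool is_even(unsigned n)
{
    if (n == 0) return true;
    return is_odd(n - 1); // recursion through the other function
}

bool is_odd(unsigned n)
{
    if (n == 0) return false;
    return is_even(n - 1);
}
```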
Dynamic programming is when you use solutions to smaller subproblems in order to solve a larger problem. This is easiest to implement recursively because you usually think of such solutions in terms of a recursive function. An iterative implementation is usually preferred though, because it takes less time and memory.
Memoization is used to prevent a recursive DP implementation from taking far more time than needed. Most of the time, a DP algorithm uses the same subproblem in solving multiple larger problems, so a naive recursive implementation recomputes the same thing many times. Memoization means saving the results of these subproblems in a table. When entering a recursive call, we check whether its result already exists in the table: if yes, we return it; if not, we compute it, save it in the table, and then return it.
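That check/compute/store pattern can be sketched generically; here I apply it to a toy problem of my own choosing (counting Collatz steps until n reaches 1), to keep it separate from the Fibonacci examples above:

```cpp
#include <unordered_map>

std::unordered_map<long long, int> table; // memo table: n -> steps to reach 1

// Memoized recursion: check the table, compute on a miss, store, return.
int steps(long long n)
{
    if (n == 1)
        return 0; // base case
    auto it = table.find(n);
    if (it != table.end())
        return it->second; // result already in the table: just return it
    int result = 1 + steps(n % 2 == 0 ? n / 2 : 3 * n + 1);
    table[n] = result; // save it before returning
    return result;
}
```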