Looking to understand the rationale for money denomination
Solution 1:
The greedy solution is not optimal in general, even when each denomination is larger than the sum of all smaller denominations. For example, suppose we have coins of denominations 25, 9, 4, and 1. For 37 cents, the greedy solution uses five coins: one 25, one 9, and three 1s. The optimal solution, however, is four coins: one 25 and three 4s.
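To make the arithmetic concrete, here is a small Python sketch (mine, not part of the original answer) comparing the greedy count with the optimal count, computed by standard dynamic programming, for this coin system:

    def greedy_count(amount, denoms):
        """Number of coins the greedy algorithm uses: largest denomination first."""
        count = 0
        for d in sorted(denoms, reverse=True):
            count += amount // d
            amount %= d
        return count

    def optimal_count(amount, denoms):
        """Minimum number of coins, by dynamic programming over all values up to amount."""
        INF = float("inf")
        best = [0] + [INF] * amount
        for v in range(1, amount + 1):
            best[v] = min((best[v - d] + 1 for d in denoms if d <= v), default=INF)
        return best[amount]

    print(greedy_count(37, [25, 9, 4, 1]))   # 5 coins: 25 + 9 + 1 + 1 + 1
    print(optimal_count(37, [25, 9, 4, 1]))  # 4 coins: 25 + 4 + 4 + 4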
The problem of finding the minimum number of coins for a given set of denominations is called the change-making problem. It is a special case of the famous knapsack problem and is known to be NP-hard in the general case.
An integer-programming formulation for the change-making problem was given in the answer to this math.SE question: On problems of coins totaling to a given amount.
Added: While the Wikipedia article on the change-making problem claims that the greedy algorithm is optimal for U.S. coins, it provides no proof. The article "Canonical Coin Systems for Change-Making Problems" gives characterizations of when the greedy algorithm is optimal for coin systems of up to five denominations. It also implies that for systems of more than five denominations, determining when the greedy algorithm is optimal is still an open problem.
Solution 2:
The paper D. Pearson, "A Polynomial-Time Algorithm for the Change-Making Problem," Operations Research Letters, 33(3):231-234, 2005, gives an $O(n^3)$ algorithm for deciding whether a coin system has the property you want, where $n$ is the number of different kinds of coins. From the abstract:
We then derive a set of $O(n^2)$ possible values which must contain the smallest counterexample. Each can be tested with $O(n)$ arithmetic operations, giving us an $O(n^3)$ algorithm.
The paper is quite short.
Solution 3:
Here's an algorithm that I think should decide whether, for a given set of denominations, the greedy change-making algorithm is optimal. Here coins(value, set of denominations) is a function that returns the minimal number of coins needed to make the value from those denominations.
given a set D of denominations
let m = the largest element of D
for each value v from 2*m - 1 down to 1
    let d = the largest element of D that is <= v
    if coins(v - d, D) >= coins(v, D - {d})
        return false
return true
This doesn't do much to tell you what properties will result in the greedy algorithm being optimal, but it shows that the property is easily computable.
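For reference, here is a runnable Python translation of this check (my own sketch; the function names are mine, and coins is implemented with a straightforward dynamic program in the spirit of the edit below):

    def coins(v, denoms):
        """Minimal number of coins needed to make v from denoms (infinity if impossible)."""
        INF = float("inf")
        best = [0] + [INF] * v
        for amount in range(1, v + 1):
            best[amount] = min((best[amount - d] + 1 for d in denoms if d <= amount),
                               default=INF)
        return best[v]

    def greedy_is_optimal(denoms):
        """Test the candidate values 2*m - 1 down to 1, as in the pseudocode above."""
        m = max(denoms)
        for v in range(2 * m - 1, 0, -1):
            usable = [d for d in denoms if d <= v]
            if not usable:
                continue  # no denomination fits v, so there is no greedy step to test
            d = max(usable)
            if coins(v - d, denoms) >= coins(v, [x for x in denoms if x != d]):
                return False
        return True

    print(greedy_is_optimal([25, 10, 5, 1]))  # True: greedy is optimal for U.S.-style coins
    print(greedy_is_optimal([25, 9, 4, 1]))   # False: e.g. greedy makes 16 as 9+4+1+1+1,
                                              # but 4+4+4+4 uses one coin fewer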
Edit: A dynamic-programming solution for the coins function.
Without loss of generality, let's impose an order on the way you can pick coins: once you drop down to a lower denomination, you can't go back above it. Given this, at each step you always have exactly two choices: pick a coin of the denomination you are on, or drop down to the next lower denomination. This means the optimal number of coins is coins(v, D) = min(coins(v - d, D) + 1, coins(v, D - {d})), where d is the current (highest) denomination, with base cases coins(0, D) = 0, coins(v, D) = infinity for v < 0, and coins(v, {}) = infinity for v > 0.
We can calculate coins(v, D) by first solving all possible subproblems. Construct a two-dimensional array DP with |D| rows and 2*m columns, where D is sorted in increasing order. DP[i][j] represents the minimal number of coins needed to make value j using only the denominations up to and including the ith.
(Note: the pseudocode uses 0-indexed arrays.)
array DP[D.size][2*m], all entries initialized to -1   // -1 marks "not yet computed"

for (int i = 0; i < D.size; i++)
    for (int j = 0; j < 2*m; j++)
        DP[i][j] = calc_coins(j, i)

function calc_coins(int v, int d)
    if (v < 0 || d < 0)        // negative value, or no denominations left
        return infinity
    if (v == 0)
        return 0
    if (DP[d][v] >= 0)         // already computed
        return DP[d][v]
    int pick = calc_coins(v - D[d], d) + 1   // take one coin of denomination D[d]
    int drop_down = calc_coins(v, d - 1)     // use only denominations below D[d]
    DP[d][v] = min(pick, drop_down)          // memoize before returning
    return DP[d][v]
Once this table is calculated, every call to coins(v, D) made by the outer algorithm is a constant-time lookup: coins(v, D) is just DP[D.size - 1][v], and because the outer algorithm only ever removes the largest usable denomination from D, coins(v, D - {d}) is likewise a lookup in the row just below d's index.
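As a sanity check, here is a runnable Python version of this table construction (my own translation; functools.lru_cache plays the role of the -1-initialized table in the pseudocode above):

    import functools

    def build_coins_table(denoms):
        """Fill DP per the recurrence coins(v, D) = min(coins(v-d, D) + 1, coins(v, D - {d}))."""
        D = sorted(denoms)   # D[i] is the (i+1)-th smallest denomination
        m = D[-1]

        @functools.lru_cache(maxsize=None)
        def calc_coins(v, d):
            if v < 0 or d < 0:       # negative value, or no denominations left
                return float("inf")
            if v == 0:
                return 0
            pick = calc_coins(v - D[d], d) + 1   # take one coin of denomination D[d]
            drop_down = calc_coins(v, d - 1)     # use only denominations below D[d]
            return min(pick, drop_down)

        # DP[i][j]: minimal coins for value j using only denominations D[0..i]
        return [[calc_coins(j, i) for j in range(2 * m)] for i in range(len(D))]

    DP = build_coins_table([25, 9, 4, 1])
    print(DP[-1][37])  # 4, i.e. the lookup DP[D.size - 1][37]

A table width of 2*m suffices here because the outer algorithm only ever queries values below 2*m.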