There is a way to reduce this to a simple sub-problem.

There are two parts to the explanation: the algorithm, and the reason the algorithm provides an optimal solution. The first won't make sense without the second, so I'll start with the why.

If you think of bombing the rectangle (assume a big rectangle - no edge cases yet), you can see that the only way to reduce the hollow rectangle of squares on the perimeter to 0 is to bomb either the perimeter itself or the hollow rectangle of squares just inside it. I'll call the perimeter layer 1, and the rectangle inside it layer 2.

An important insight is that there is no point bombing layer 1, because the "blast radius" you get from doing so is always contained within the blast radius of another square from layer 2. You should be able to easily convince yourself of this.

So, we can reduce the problem to finding an optimal way to bomb away the perimeter, then we can repeat that until all squares are 0.

But of course, that won't always find an optimal solution if it's possible to bomb away the perimeter in a less-than-optimal fashion, yet by spending X extra bombs make the later job of reducing the inner layers easier by more than X bombs. So the question is: if we place an extra X bombs somewhere in layer 2 (just inside layer 1), can we reduce the effort of later bombing away layer 2 by more than X? In other words, we have to prove we can be greedy in reducing the outer perimeter.

But we do know we can be greedy, because no bomb in layer 2 can ever be more efficient at reducing layer 2 to 0 than a strategically placed bomb in layer 3. The reason is the same as before - there is always a bomb we can place in layer 3 that affects every square of layer 2 that a bomb placed in layer 2 can. So it can never harm us to be greedy (in this sense of greedy).

So, all we have to do is find the optimal way to reduce the perimeter to 0 by bombing the next inner layer.

We are never hurt by first bombing the outer corner down to 0, because (ignoring dominated bombs on the perimeter itself) only the corner of the inner layer can reach it, so we really have no choice - any bomb on the perimeter that can reach the corner has a blast radius contained in the blast radius from the corner of the inner layer.

Once we have done so, the squares on the perimeter adjacent to the 0 corner can only be reached by 2 squares from the inner layer:

0       A       B

C       X       Y

D       Z

At this point the perimeter is effectively a closed 1-dimensional loop, because any bomb will reduce 3 adjacent squares - except for some weirdness near the corners, where X can "hit" A, B, C, and D.

Now we can't use any blast radius tricks - the situation of each square is symmetric, except for the weird corners, and even there no blast radius is a subset of another. Note that if this were a line (as Colonel Panic discusses) instead of a closed loop the solution is trivial. The end points must be reduced to 0, and it never harms you to bomb the points adjacent to the end points, again because the blast radius is a superset. Once you have made your endpoint 0, you still have a new endpoint, so repeat (until the line is all 0).
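To make the line case concrete, here is a minimal sketch of that endpoint-peeling strategy (a plain Python rendering; the list-of-counts representation and the name clear_line are mine, not from the question):

# A minimal sketch of the line case, assuming the row is a plain list of
# non-negative counts.
def clear_line(cells):
    cells = list(cells)
    bombs = 0
    for i in range(len(cells)):
        # the leftmost remaining non-zero cell is the current "endpoint";
        # bombing just to its right dominates bombing it directly
        target = min(i + 1, len(cells) - 1)
        while cells[i] > 0:
            for j in (target - 1, target, target + 1):
                if 0 <= j < len(cells):
                    cells[j] = max(0, cells[j] - 1)
            bombs += 1
    return bombs

print(clear_line([1, 1, 2, 1, 1]))   # -> 2 (bombs at indices 1 and 3)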

So, if we can optimally reduce a single square in the layer to 0 we have an algorithm (because we have cut the loop and now have a straight line with endpoints). I believe bombing adjacent to the square with the lowest value (giving you 2 options) such that the highest value within 2 squares of that lowest value is the minimum possible (you may have to split your bombing to manage this) will be optimal but I don't (yet?) have a proof.


Pólya says "If you can't solve a problem, then there is an easier problem you can solve: find it."

The obvious simpler problem is the 1-dimensional problem (when the grid is a single row). Let's start with the simplest algorithm - greedily bombing the biggest target. When does this go wrong?

Given 1 1 1, the greedy algorithm is indifferent to which cell it bombs first. Of course, the centre cell is better - it zeros all three cells at once. This suggests a new algorithm A, "bomb to minimise the sum remaining". When does this algorithm go wrong?

Given 1 1 2 1 1, algorithm A is indifferent between bombing the 2nd, 3rd or 4th cells. But bombing the 2nd cell to leave 0 0 1 1 1 is better than bombing the 3rd cell to leave 1 0 1 0 1. How to fix that? The problem with bombing the 3rd cell is that it leaves us work to the left and work to the right which must be done separately.

How about "bomb to minimise the sum remaining, but maximise the minimum to the left (of where we bombed) plus the minimum to the right". Call this algorithm B. When does this algorithm go wrong?


Edit: After reading the comments, I agree a much more interesting problem would be the one dimensional problem changed so that the ends join up. Would love to see any progress on that.


I had to stop at only a partial solution since I was out of time, but hopefully even this partial solution provides some insights on one potential approach to solving this problem.

When faced with a hard problem, I like to come up with simpler problems to develop an intuition about the problem space. Here, the first step I took was to reduce this 2-D problem into a 1-D problem. Consider a line:

0 4 2 1 3 0 1

Somehow or another, you know you will need to bomb at or around the 4 spot 4 times to get it down to 0. Since the number to its left is lower, there is no benefit to bombing the 0 or the 4 over bombing the 2. In fact, I believe (but lack a rigorous proof) that bombing the 2 until the 4 goes down to 0 is at least as good as any other strategy for getting that 4 down to 0. One can proceed down the line, left to right, with a strategy like this:

# `line` is a list of counts; a bomb at `index` decrements cells index-1,
# index and index+1 (where they exist), never below 0.
def bomb(line, index):
    for j in (index - 1, index, index + 1):
        if 0 <= j < len(line):
            line[j] = max(0, line[j] - 1)

line = [0, 4, 2, 1, 3, 0, 1]      # the example line from above
index = 1
while index < len(line):
    # clear the cell to the left of `index` by bombing at `index`
    while line[index - 1] > 0:
        bomb(line, index)
    index += 1
# take care of the end of the line
while line[index - 1] > 0:
    bomb(line, index - 1)

A couple sample bombing orders:

0 4[2]1 3 0 1
0 3[1]0 3 0 1
0 2[0]0 3 0 1
0 1[0]0 3 0 1
0 0 0 0 3[0]1
0 0 0 0 2[0]0
0 0 0 0 1[0]0
0 0 0 0 0 0 0

4[2]1 3 2 1 5
3[1]0 3 2 1 5
2[0]0 3 2 1 5
1[0]0 3 2 1 5
0 0 0 3[2]1 5
0 0 0 2[1]0 5
0 0 0 1[0]0 5
0 0 0 0 0 0[5]
0 0 0 0 0 0[4]
0 0 0 0 0 0[3]
0 0 0 0 0 0[2]
0 0 0 0 0 0[1]
0 0 0 0 0 0 0

The idea of starting with a number that needs to go down some way or another is an appealing one, because it suddenly becomes attainable to find a solution that has some claim to being at least as good as all other solutions.

The next step up in complexity where this search for "at least as good" is still feasible is the edge of the board. It is clear to me that there is never any strict benefit to bombing the outer edge; you're better off bombing the spot one in and getting three other spaces for free. Given this, we can say that bombing the ring one inside of the edge is at least as good as bombing the edge itself. Moreover, bombing that ring is effectively the only way to get the edge spaces down to 0. Even better, it is trivially simple to figure out the optimal strategy (in that it is at least as good as any other strategy) for getting the corner numbers down to 0. We put this all together and can get much closer to a solution in the 2-D space.

Given the observation about corner pieces, we can say for sure that we know the optimal strategy to go from any starting board to a board with zeros on all corners. This is an example of such a board (I borrowed the numbers from the two linear boards above). I've labelled some spaces differently, and I'll explain why.

0 4 2 1 3 0 1 0
4 x x x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0

One will notice that the top row really closely resembles the linear example we saw earlier. Recall our earlier observation that the optimal way to get the top row all down to 0 is to bomb the second row (the x row): there is no way to clear the top row by bombing any of the y rows, and no additional benefit to bombing the top row over bombing the corresponding space on the x row.

We could apply the linear strategy from above (bombing the corresponding spaces on the x row), concerning ourselves only with the top row and nothing else. It would go something like this:

0 4 2 1 3 0 1 0
4 x[x]x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0

0 3 1 0 3 0 1 0
4 x[x]x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0

0 2 0 0 3 0 1 0
4 x[x]x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0

0 1 0 0 3 0 1 0
4 x[x]x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0

0 0 0 0 3 0 1 0
4 x x x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0

The flaw in this approach becomes very obvious in the final two bombings. The only bomb sites that reduce the 4 in the first column of the second row are the first x and the y directly below it. The final two bombings are therefore clearly inferior to just bombing the first x, which would have done the exact same work on the remaining value in the top row while also reducing that 4. Since we have demonstrated that our current strategy is suboptimal, a modification in strategy is clearly needed.

At this point, I can take a step back down in complexity and focus on just one corner. Let's consider this one:

0 4 2 1
4 x y a
2 z . .
1 b . .

It is clear that the only way to get the spaces with 4 down to zero is to bomb some combination of x, y, and z. With some acrobatics in my mind, I'm fairly sure the optimal solution is to bomb x three times and then a, then b. Now it's a matter of figuring out how I reached that solution and whether it reveals any intuition we can use to solve even this local problem. I notice that there's no bombing of the y and z spaces. Attempting to find a corner where bombing those spaces makes sense yields a corner that looks like this:

0 4 2 5 0
4 x y a .
2 z . . .
5 b . . .
0 . . . .

For this one, it is clear to me that the optimal solution is to bomb y 5 times and z 5 times. Let's go one step further.

0 4 2 5 6 0 0
4 x y a . . .
2 z . . . . .
5 b . . . . .
6 . . . . . .
0 . . . . . .
0 . . . . . .

Here, it feels similarly intuitive that the optimal solution is to bomb a and b 6 times each and then x 4 times.
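These intuitions are easy to check by hand, but a small helper makes experimenting with them less error-prone. This is my own scaffolding rather than part of the argument, and the unknown interior cells are set to 0 purely to make the grid concrete, since only the border values matter here:

def drop_bombs(grid, bombs):
    # Return a copy of `grid` after one bomb per (row, col) in `bombs`;
    # each bomb decrements its 3x3 neighbourhood, never below 0.
    g = [row[:] for row in grid]
    for r, c in bombs:
        for i in range(max(0, r - 1), min(len(g), r + 2)):
            for j in range(max(0, c - 1), min(len(g[0]), c + 2)):
                g[i][j] = max(0, g[i][j] - 1)
    return g

# The last corner above, with the placeholder cells set to 0.
# x = (1,1), a = (1,3), b = (3,1).
corner = [[0, 4, 2, 5, 6, 0, 0],
          [4, 0, 0, 0, 0, 0, 0],
          [2, 0, 0, 0, 0, 0, 0],
          [5, 0, 0, 0, 0, 0, 0],
          [6, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0]]
plan = [(1, 3)] * 6 + [(3, 1)] * 6 + [(1, 1)] * 4   # a and b six times, then x four times
result = drop_bombs(corner, plan)
print(result[0], [row[0] for row in result])         # top row and left column are now all 0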

Now it becomes a game of how to turn those intuitions into principles we can build on.

Hopefully to be continued!


For the updated question, a simple greedy algorithm gives an optimal result.

Drop A[0,0] bombs on cell A[1,1], then drop A[1,0] bombs on cell A[2,1], and continue this process downwards. To clean the bottom left corner, drop max(A[N-1,0], A[N-2,0], A[N-3,0]) bombs on cell A[N-2,1]. This completely cleans up the first 3 columns.

With the same approach, clean columns 3, 4, 5, then columns 6, 7, 8, and so on.
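Here is a direct transcription of that sweep as I read it (a sketch only: it assumes A is indexed A[row][col], the board is at least 3x3, the width is a multiple of 3, and the monotonicity constraint of the updated question holds - the optimality claim is the one stated above, not something the code verifies):

def bomb(A, r, c, count=1):
    # drop `count` bombs on (r, c): decrement the 3x3 neighbourhood, never below 0
    for i in range(max(0, r - 1), min(len(A), r + 2)):
        for j in range(max(0, c - 1), min(len(A[0]), c + 2)):
            A[i][j] = max(0, A[i][j] - count)

def clean(A):
    N = len(A)
    total = 0
    for c in range(0, len(A[0]), 3):          # column groups 0-2, 3-5, 6-8, ...
        for r in range(N - 3):                # sweep downwards, driven by column c
            k = A[r][c]
            bomb(A, r + 1, c + 1, k)
            total += k
        k = max(A[N - 1][c], A[N - 2][c], A[N - 3][c])   # finish the bottom corner
        bomb(A, N - 2, c + 1, k)
        total += k
    return total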

Unfortunately, this does not help in finding a solution to the original problem.


"Larger" problem (without "nonicreasing" constraint) may be proven to be NP-hard. Here is sketch of a proof.

Suppose we have a planar graph of degree up to 3, and we want to find a minimum vertex cover for it. According to the Wikipedia article, this problem is NP-hard for planar graphs of degree up to 3; this can be proven by reduction from Planar 3SAT, and the hardness of Planar 3SAT by reduction from 3SAT. Both of these proofs are presented in recent lectures of "Algorithmic Lower Bounds" by Prof. Erik Demaine (lectures 7 and 9).

If we split some edges of the original graph (the left graph in the diagram), each with an even number of additional nodes, the resulting graph (the right graph in the diagram) has exactly the same minimum vertex cover on the original vertices. Such a transformation lets us align the graph's vertices to arbitrary positions on the grid.

[diagram: the original graph on the left, and the same graph with subdivided edges aligned to the grid on the right]

If we place graph vertices only on even rows and columns (in such a way that no two edges incident to one vertex form an acute angle), put "ones" wherever there is an edge, and "zeros" at all other grid positions, then any solution to the original problem can be used to find a minimum vertex cover.


You can represent this problem as an integer programming problem. (This is just one possible way of approaching it.)

Having points:

a b c d
e f g h
i j k l
m n o p

one can write 16 inequalities. Writing a_i for the number of bombs dropped on cell a, b_i for cell b, and so on, the inequality for point f, for example, is

f <= a_i + b_i + c_i + e_i + f_i + g_i + i_i + j_i + k_i

i.e. each cell's value must be covered by the bombs dropped on it and its neighbours. The objective is to minimise the sum of all these variables over integer solutions.

The solution (the number of bombs) is of course the sum of these variables.

This can be further simplified by setting all the boundary variables to 0, so you end up having 4+1 equations in this example.

The problem is that there is no trivial algorithm for solving such problems. I am not an expert on this, but solving the problem as an integer program is NP-hard in general.
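To make the formulation concrete, here is a sketch using the PuLP library (my choice of solver wrapper, and an assumption that it is available; any ILP solver would do, and the name min_bombs is made up). Each variable counts the bombs dropped on one cell, and each cell contributes one covering constraint:

import pulp

def min_bombs(board):
    R, C = len(board), len(board[0])
    prob = pulp.LpProblem("bombs", pulp.LpMinimize)
    # x[r][c] = number of bombs dropped on cell (r, c)
    x = [[pulp.LpVariable(f"x_{r}_{c}", lowBound=0, cat="Integer")
          for c in range(C)] for r in range(R)]
    # objective: total number of bombs
    prob += pulp.lpSum(x[r][c] for r in range(R) for c in range(C))
    # each cell's value must be covered by bombs on its 3x3 neighbourhood
    for r in range(R):
        for c in range(C):
            neighbours = [x[i][j]
                          for i in range(max(0, r - 1), min(R, r + 2))
                          for j in range(max(0, c - 1), min(C, c + 2))]
            prob += pulp.lpSum(neighbours) >= board[r][c]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return int(pulp.value(prob.objective))

print(min_bombs([[1, 1, 1],
                 [1, 1, 1],
                 [1, 1, 1]]))   # 1 -- a single bomb in the centre suffices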