Find local minimum in n x n matrix in O(n) time

So, this is not my homework question, but it is taken from an ungraded homework of the Coursera course on algorithms and data structures (which is now complete).

You are given an n by n grid of distinct numbers. A number is a local minimum if it is smaller than all of its neighbors. (A neighbor of a number is one immediately above, below, to the left, or to the right. Most numbers have four neighbors; numbers on the sides have three; the four corners have two.) Use the divide-and-conquer algorithm design paradigm to compute a local minimum with only O(n) comparisons between pairs of numbers. (Note: since there are n² numbers in the input, you cannot afford to look at all of them. Hint: Think about what types of recurrences would give you the desired upper bound.)

Since the numbers are not in any order, I don't see how we can get away with anything less than O(n²) comparisons.


We can adapt Words Like Jared's answer by looking at how it can go wrong.

The idea in that answer -- which is a good one -- is to "roll downhill". This just means: if you are standing on an element, check whether it is a local minimum. If so, you are done; otherwise, step to the smallest of its neighbors. Eventually this must terminate, because every step is to a smaller element, and that cannot go on forever in a finite array.
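In code, the walk looks something like this (a minimal Python sketch; the function name and the choice of starting cell are mine):

    def roll_downhill(grid, r=0, c=0):
        """Greedy descent: repeatedly step to the smallest neighbor.

        Always finds a local minimum, but may visit Theta(n^2) cells,
        as the example below shows.
        """
        n, m = len(grid), len(grid[0])
        while True:
            best_r, best_c = r, c
            # Consider the up/down/left/right neighbors that exist.
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < n and 0 <= nc < m and grid[nr][nc] < grid[best_r][best_c]:
                    best_r, best_c = nr, nc
            if (best_r, best_c) == (r, c):
                return r, c        # no neighbor is smaller: a local minimum
            r, c = best_r, best_c  # step to the smallest neighbor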

The problem with this approach is that the "rolling" can meander all over the place:

20 100 12  11 10 100  2
19 100 13 100  9 100  3
18 100 14 100  8 100  4
17  16 15 100  7   6  5

If you start at the upper left and "roll downhill", you will visit around half of the elements in the array, i.e. Θ(n²) of them. That is too many, so we have to constrain the rolling a bit.

Start by examining the middle column and middle row. Find the smallest element among all of those and start there.

Roll one step "downhill" from there to enter one of the four quadrants (if that smallest element is already a local minimum, you are done). You will enter a quadrant, because the element is the smallest on the middle column and row, so any smaller neighbor must lie off the cross; stepping to the smallest neighbor therefore takes you into one of the adjacent quadrants.

Now consider what would happen if you "rolled downhill" from there. Obviously, you would eventually reach a local minimum. (We will not actually do this, because it would take too long.) But in the course of rolling around, you would never leave that quadrant: to leave it, you would have to cross either the middle column or the middle row, and none of those elements is smaller than where you started. Therefore that quadrant contains a local minimum somewhere.

Thus, in linear time, we have identified a quadrant that must contain a local minimum, and we have cut n in half. Now just recurse.

This algorithm takes time 2n + 2n/2 + 2n/4 + ..., a geometric series that sums to 4n, which is O(n), so we are done.

Interestingly, we did not use "rolling downhill" very much at all, except for the critical part: proving that the algorithm works.

[Update]

As Incassator points out, this answer is not entirely correct, because after you "just recurse" you might roll out of the quadrant again...

The simplest fix is to find the smallest element among the middle row, middle column, and boundary before you "roll downhill".
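Putting the fixed version together, written iteratively (a sketch, not the literal course solution: Python, a rectangular list of lists of distinct numbers, 0-indexed; all names are mine):

    def find_local_min(grid):
        """Return the (row, col) of a local minimum in O(n) comparisons.

        Each pass scans the "frame" of the current window -- its boundary
        plus its middle row and middle column, about 6n cells -- and then
        shrinks the window to a quadrant, so the total work is
        6n + 6n/2 + 6n/4 + ... = O(n).
        """
        n, m = len(grid), len(grid[0])
        top, bottom, left, right = 0, n - 1, 0, m - 1

        while True:
            mid_r, mid_c = (top + bottom) // 2, (left + right) // 2

            # The frame: boundary rows/columns plus the middle cross.
            frame = [(r, c) for r in range(top, bottom + 1)
                            for c in (left, mid_c, right)]
            frame += [(r, c) for r in (top, mid_r, bottom)
                             for c in range(left, right + 1)]
            br, bc = min(frame, key=lambda rc: grid[rc[0]][rc[1]])

            # Smallest neighbor of the frame minimum, within the window.
            nbrs = [(br + dr, bc + dc)
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if top <= br + dr <= bottom and left <= bc + dc <= right]
            if not nbrs:                       # degenerate 1x1 grid
                return br, bc
            sr, sc = min(nbrs, key=lambda rc: grid[rc[0]][rc[1]])
            if grid[br][bc] < grid[sr][sc]:
                return br, bc                  # a local minimum of the full grid

            # Otherwise the smaller neighbor lies strictly inside one
            # quadrant; shrink to that quadrant, keeping the cross lines
            # as its new boundary.
            if sr < mid_r: bottom = mid_r
            else:          top = mid_r
            if sc < mid_c: right = mid_c
            else:          left = mid_c

On the "meandering" grid from earlier, this returns after a single pass over the frame:

    grid = [[20, 100, 12,  11, 10, 100,  2],
            [19, 100, 13, 100,  9, 100,  3],
            [18, 100, 14, 100,  8, 100,  4],
            [17,  16, 15, 100,  7,   6,  5]]
    print(find_local_min(grid))   # (0, 6) -- the 2 in the corner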


The accepted answer by Nemo is nice but not fully correct:

Thus, in linear time, we have identified a quadrant that must contain a local minimum, and we have cut n in half. Now just recurse.

I am referring to the "just recurse" bit. The problem is that we cannot do that directly, because on the next iteration we might find a local minimum of the quadrant which is not a local minimum of the original grid (x below means some arbitrarily large number):

 x  x 39  x  x 50  x  x  x  x  x
 x  x 38  x  x 49  x  x  x  x  x
37 36 33 34 35 48  x  x  x  x  x
 x  x 32  x  1 10  x  x  x  x  x
 x  x 31  x  x 47  x  x  x  x  x
46 45 30 44 43 60 51 52 53 54 55
 x  x  2  x  x 56  x  x  x  x  x
 x  x  x  x  x 57  x  x  x  x  x
 x  x  x  x  x 58  x  x  x  x  x
 x  x  x  x  x 59  x  x  x  x  x

On the first iteration we find 10 to be the minimum of the middle row and middle column. We go to the left (as 1 is less than 10), so the next iteration runs on the upper-left quadrant. But now the minimum of the middle row and column is going to be 31 (or 30, if the quadrant's borders are considered part of it). You will then conclude that it is a local minimum. But it is not one for the full grid: 31 has the smaller neighbor 30 directly below it (and 30, in turn, has the smaller neighbor 2 below it).

We can rectify this unfortunate defect in a variety of ways. I solved it like this:

At each iteration, in addition to the grid itself, we keep track of the current minimum candidate (that is the 1 in the example above after the first iteration; in the initial state we can say the minimum candidate is plus infinity). We compute the minimum of the middle row and column and compare it to the minimum candidate. If the candidate is smaller, we recurse into the quadrant containing the candidate. Otherwise we forget the previous candidate, and only then check whether the new middle row/column minimum is actually a local minimum. If it is not, we recurse as usual into whichever quadrant we slope down into (and track the new minimum candidate).
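Here is a sketch of that bookkeeping, in the same style as above (my own rendering, iterative, Python; names are mine). The window shrinks to the open quadrant, excluding the middle row and column, so that everything adjacent to the window from outside is known to be larger than the candidate:

    def find_local_min_tracked(grid):
        """Cross-only quadrant recursion, made correct by carrying the
        current minimum candidate between iterations."""
        n, m = len(grid), len(grid[0])
        top, bottom, left, right = 0, n - 1, 0, m - 1
        best = None                    # position of the minimum candidate

        def val(p):
            return grid[p[0]][p[1]]

        while True:
            mid_r, mid_c = (top + bottom) // 2, (left + right) // 2

            # Minimum of the window's middle row and middle column.
            cross = [(mid_r, c) for c in range(left, right + 1)]
            cross += [(r, mid_c) for r in range(top, bottom + 1)]
            p = min(cross, key=val)

            if best is not None and val(best) < val(p):
                target = best          # the candidate wins: recurse toward it
            else:
                best = None            # forget the previous candidate
                nbrs = [(p[0] + dr, p[1] + dc)
                        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= p[0] + dr < n and 0 <= p[1] + dc < m]
                if not nbrs or val(p) < min(val(q) for q in nbrs):
                    return p           # smaller than all neighbors in the grid
                best = target = min(nbrs, key=val)   # roll one step downhill

            # The target is off the cross, so it determines a quadrant;
            # shrink to the open quadrant (middle row/column excluded).
            if target[0] < mid_r: bottom = mid_r - 1
            else:                 top = mid_r + 1
            if target[1] < mid_c: right = mid_c - 1
            else:                 left = mid_c + 1

On the 11-column example above, the first pass finds 10 on the middle column, steps to 1, and recurses into the upper-left quadrant carrying 1 as the candidate; from then on 1 keeps beating each cross minimum, so the recursion homes in on 1 instead of wrongly stopping at 31.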

Alternatively, you can modify the procedure as described in this presumably MIT lecture: at each iteration, instead of looking only at the middle row/column, you look at the middle row/column and the boundary of the current grid. Then the algorithm once again is correct.

Choose whichever way you like.