Retrieving the top 100 numbers from one hundred million numbers

Run them all through a min-heap of size 100: seed it with the first 100 inputs, then for each remaining input number k, replace the current min m with max(k, m). Afterwards the heap holds the 100 largest inputs.

A search engine like Lucene can use this method, with refinements, to choose the most relevant search results.

Edit: I failed the interview -- I got the details wrong twice (despite having done this before, in production). Here's code to check it; it's almost the same as Python's standard heapq.nlargest():

import heapq

def funnel(n, numbers):
    """Return the n largest of numbers, in heap order (not sorted)."""
    if n == 0:
        return []
    heap = numbers[:n]          # seed a min-heap with the first n numbers
    heapq.heapify(heap)
    for k in numbers[n:]:
        if heap[0] < k:         # k beats the smallest of the current top n
            heapq.heapreplace(heap, k)
    return heap

>>> funnel(4, [3,1,4,1,5,9,2,6,5,3,5,8])
[5, 8, 6, 9]
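
For comparison, the stdlib version also sorts its result, while funnel leaves the winners in raw heap order:

>>> heapq.nlargest(4, [3,1,4,1,5,9,2,6,5,3,5,8])
[9, 8, 6, 5]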

Ok, here is a really stupid answer, but it is a valid one:

  • Load all 100 million entries into an array
  • Call some quicksort implementation on it
  • Take the last 100 items (the sort is ascending), or the first 100 if you can sort descending - as sketched just below.
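
A minimal sketch of those three steps (using Python's built-in sort in place of a hand-rolled quicksort; the function name is just illustrative):

def top_100_by_sorting(numbers, n=100):
    numbers = sorted(numbers)   # ascending sort of all entries
    return numbers[-n:]         # the last n items are the n largest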

Reasoning:

  • There is no context to the question, so efficiency can be argued - what IS efficient? Computer time or programmer time?
  • This method is very fast to implement.
  • 100 million numbers are just a few hundred MB (100 million 4-byte integers is about 400 MB), so every decent workstation can simply run that.

It is an OK solution for a one-time operation. It would suck running it x times per second or something - but then we need more context, as mclientk also did for his simple SQL statement. Assuming the 100 million numbers do not already exist in memory is plausible, because they may come from a database - and most of the time they will, when we are talking about business-relevant numbers.

As such, the question is really hard to answer - efficiency first has to be defined.


Mergesort in batches of 100, then keep only the top 100.

Incidentally, you can scale this in all sorts of directions, including running the batches concurrently.
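
Here's a rough sketch of that idea in Python, assuming the input is an in-memory sequence (the function name and batch handling are mine, not a tuned implementation): keep a running, descending-sorted top-n; sort each incoming batch; merge; truncate.

import heapq

def top_n_by_batch_merge(numbers, n=100, batch=100):
    """Merge sorted batches into a running top-n, discarding the rest."""
    top = []                                       # always sorted descending
    for i in range(0, len(numbers), batch):
        chunk = sorted(numbers[i:i + batch], reverse=True)
        top = list(heapq.merge(top, chunk, reverse=True))[:n]
    return top

Because each batch is sorted independently, the batches (or a tournament of pairwise merges) parallelize naturally, which is the concurrent scaling mentioned above.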


If the data is already in an array that you can modify, you could use a variant of Hoare's Select algorithm, which is (in turn) a variant of Quicksort.

The basic idea is pretty simple. In Quicksort, you partition the array into two pieces, one of items larger than the pivot, and the other of items smaller than the pivot. Then you recursively sort each partition.

In the Select algorithm, you do the partitioning step exactly as before -- but instead of recursively sorting both partitions, you look at which partition contains the elements you want, and recursively select ONLY in that partition. E.g., assuming your 100 million items partition nearly in half, for the first several iterations you're going to look only at the upper partition.

Eventually, you're likely to reach a point where the portion you want "bridges" two partitions -- e.g., you have a partition of ~150 numbers, and when you partition that you end up with two pieces of ~75 apiece. At that point, only one minor detail changes: instead of rejecting one partition and continuing work on only the other, you accept the upper partition of 75 items, and then continue looking for the top 25 in the lower partition.

If you were doing this in C++, you could do this with std::nth_element (which will normally be implemented approximately as described above). On average, this has linear complexity, which I believe is about as good as you can hope for (absent some preexisting order, I don't see any way to find the top N elements without looking at all the elements).
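
std::nth_element does this in-place in C++; here's a rough Python sketch of the same partition-and-recurse-into-one-side idea (top_n is illustrative, not a tuned implementation):

import random

def top_n(items, n):
    """Return the n largest items, in no particular order."""
    if n <= 0:
        return []
    if n >= len(items):
        return list(items)
    pivot = random.choice(items)
    upper = [x for x in items if x > pivot]
    equal = [x for x in items if x == pivot]
    lower = [x for x in items if x < pivot]
    if n <= len(upper):
        return top_n(upper, n)                 # all winners are above the pivot
    if n <= len(upper) + len(equal):
        return upper + equal[:n - len(upper)]  # pivot copies fill out the rest
    # accept the whole upper partition, keep selecting in the lower one
    return upper + equal + top_n(lower, n - len(upper) - len(equal))

Each pass discards the partition that cannot contain anything we still need, which is where the expected linear time comes from.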

If the data's not already in an array, and you're (for example) reading the data from a file, you usually want to use a heap. You basically read an item, insert it into the heap, and if the heap is larger than your target (100 items, in this case) you remove one and re-heapify.

What's probably not so obvious (but is actually true) is that you don't normally want to use a max-heap for this task. At first glance, it seems pretty obvious: if you want to get the maximum items, you should use a max-heap.

It's simpler, however, to think in terms of the items you're "removing" from the heap. A max heap lets you find the one largest item in the heap quickly. It is not, however, optimized for finding the smallest item in the heap.

In this case, we're interested primarily in the smallest item in the heap. In particular, when we read each item in from the file, we want to compare it to the smallest item in the heap. If (and only if) it's larger than the smallest item in the heap, we want to replace that smallest item currently in the heap with the new item. Since that's (by definition) larger than the existing item, we'll then need to sift that into the correct position in the heap.

But note: if the items in the file are randomly ordered, as we read through the file we fairly quickly reach a point at which most items we read from the file will be smaller than the smallest item in our heap. Since we have easy access to the smallest item in the heap, that comparison is quick and easy, and smaller items are never inserted into the heap at all.
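
A minimal sketch of that loop, assuming one integer per line in a text file (the function name and file format are mine):

import heapq

def top_100_from_file(path, n=100):
    """Stream numbers from a file, keeping a min-heap of the n largest so far."""
    heap = []
    with open(path) as f:
        for line in f:
            k = int(line)
            if len(heap) < n:
                heapq.heappush(heap, k)     # still filling up to n items
            elif heap[0] < k:               # k beats the smallest of the top n
                heapq.heapreplace(heap, k)  # pop the min, push k, one sift
    return heap

The heap[0] test is the fast path described above: once the heap fills with large values, most lines fail it and never touch the heap at all.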