O(n log n) Algorithm - Find three evenly spaced ones within a binary string
I had this question on an Algorithms test yesterday, and I can't figure out the answer. It is driving me absolutely crazy, because it was worth about 40 points. I figure that most of the class didn't solve it correctly, because I haven't come up with a solution in the past 24 hours.
Given an arbitrary binary string of length n, find three evenly spaced ones within the string if they exist. Write an algorithm which solves this in O(n * log(n)) time.
So strings like these have three ones that are "evenly spaced": 11100000, 0100100100
Edit: the string is arbitrary, so the algorithm should work for any input. The examples I gave were just to illustrate the "evenly spaced" property. So 1001011 is a valid string, with the ones at positions 1, 4, and 7 being evenly spaced.
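For reference, here is a small sketch of a straightforward checker for the property (the function name is my own choice). It tries every pair of 1s as the first two elements of the progression and looks for a 1 at the implied third position; this is the easy quadratic-style baseline that the answers below discuss and improve on.

```python
def find_triple_naive(s):
    """Return 1-based positions of three evenly spaced 1s in s, or None."""
    n = len(s)
    ones = [i for i, ch in enumerate(s) if ch == '1']
    for i, a in enumerate(ones):
        for b in ones[i + 1:]:
            c = 2 * b - a                     # third position, same spacing
            if c < n and s[c] == '1':
                return (a + 1, b + 1, c + 1)  # 1-based, as in the examples
    return None

print(find_triple_naive("1001011"))  # (1, 4, 7)
```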
Finally! Following up leads in sdcvvc's answer, we have it: the O(n log n) algorithm for the problem! It is simple too, after you understand it. Those who guessed FFT were right.
The problem: we are given a binary string S of length n, and we want to find three evenly spaced 1s in it. For example, S may be 110110010, where n = 9. It has evenly spaced 1s at positions 2, 5, and 8.
1. Scan S left to right, and make a list L of the positions of the 1s. For the S = 110110010 above, we have the list L = [1, 2, 4, 5, 8]. This step is O(n). The problem is now to find an arithmetic progression of length 3 in L, i.e. to find distinct a, b, c in L such that b - a = c - b, or equivalently a + c = 2b. For the example above, we want to find the progression (2, 5, 8).
2. Make a polynomial p with terms x^k for each k in L. For the example above, we make the polynomial p(x) = x + x^2 + x^4 + x^5 + x^8. This step is O(n).
3. Find the polynomial q = p^2, using the Fast Fourier Transform. For the example above, we get the polynomial q(x) = x^16 + 2x^13 + 2x^12 + 3x^10 + 4x^9 + x^8 + 2x^7 + 4x^6 + 2x^5 + x^4 + 2x^3 + x^2. This step is O(n log n).
4. Ignore all terms except those corresponding to x^(2k) for some k in L. For the example above, we get the terms x^16, 3x^10, x^8, x^4, x^2. This step is O(n), if you choose to do it at all.
Here's the crucial point: the coefficient of any x^(2b) for b in L is precisely the number of pairs (a, c) in L such that a + c = 2b. [CLRS, Ex. 30.1-7] One such pair is always (b, b) (so the coefficient is at least 1), but if there exists any other pair (a, c), then the coefficient is at least 3, from (a, c) and (c, a). For the example above, the coefficient of x^10 is 3 precisely because of the AP (2, 5, 8). (These coefficients of x^(2b) will always be odd numbers, for the reasons above. And all other coefficients in q will always be even.)
So then, the algorithm is to look at the coefficients of these terms x^(2b), and see if any of them is greater than 1. If there is none, then there are no evenly spaced 1s. If there is a b in L for which the coefficient of x^(2b) is greater than 1, then we know that there is some pair (a, c), other than (b, b), for which a + c = 2b. To find the actual pair, we simply try each a in L (the corresponding c would be 2b - a) and see if there is a 1 at position 2b - a in S. This step is O(n).
That's all, folks.
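For concreteness, here is a minimal Python sketch of the whole procedure, using numpy's FFT. The function name and the 0-based indexing are my own choices, not part of the steps above.

```python
import numpy as np

def find_evenly_spaced_ones(s):
    """Return 0-based positions (a, b, c) of three evenly spaced 1s, or None."""
    n = len(s)
    ones = [i for i, ch in enumerate(s) if ch == '1']   # the list L
    if len(ones) < 3:
        return None

    # Step 2: coefficient vector of p(x) = sum of x^k for k in L.
    p = np.zeros(n)
    p[ones] = 1.0

    # Step 3: q = p^2 via FFT. Pad so the circular convolution equals the
    # ordinary product of polynomials (degree of q is at most 2(n-1)).
    size = 1
    while size < 2 * n:
        size *= 2
    fp = np.fft.rfft(p, size)
    q = np.rint(np.fft.irfft(fp * fp, size)).astype(np.int64)

    # Steps 4-5: for each b in L, the coefficient of x^(2b) counts pairs
    # (a, c) in L with a + c = 2b; it exceeds 1 iff some pair other than
    # (b, b) exists, i.e. iff b is the middle of a 3-term progression.
    for b in ones:
        if q[2 * b] > 1:
            for a in ones:
                c = 2 * b - a
                if a != b and 0 <= c < n and s[c] == '1':
                    return (a, b, c)
    return None

print(find_evenly_spaced_ones("110110010"))  # (1, 4, 7), i.e. 2, 5, 8 in 1-based
```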
One might ask: do we need to use FFT? Many answers, such as beta's, flybywire's, and rsp's, suggest that the approach that checks each pair of 1s and sees if there is a 1 at the "third" position might work in O(n log n), based on the intuition that if there are too many 1s, we would find a triple easily, and if there are too few 1s, checking all pairs takes little time. Unfortunately, while this intuition is correct and the simple approach is better than O(n^2), it is not significantly better. As in sdcvvc's answer, we can take the "Cantor-like set" of strings of length n = 3^k, with 1s at the positions whose ternary representation has only 0s and 2s (no 1s) in it. Such a string has 2^k = n^(log 2 / log 3) ≈ n^0.63 ones in it and no evenly spaced 1s, so checking all pairs would be of the order of the square of the number of 1s in it: that's 4^k ≈ n^1.26, which unfortunately is asymptotically much larger than n log n. In fact, the worst case is even worse: Leo Moser in 1953 constructed (effectively) such strings which have n^(1 - c/√(log n)) 1s in them but no evenly spaced 1s, which means that on such strings the simple approach would take Θ(n^(2 - 2c/√(log n))), only a tiny bit better than Θ(n^2), surprisingly!
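To see the counterexample concretely, here is a small sketch (my own code, with 0-based positions) that builds such a Cantor-like string for the construction quoted above and confirms with the naive pair check that it has 2^k ones but no evenly spaced triple:

```python
def cantor_like_string(k):
    """String of length 3^k with 1s at positions whose ternary digits avoid 1."""
    def no_one_digit(i):
        while i:
            if i % 3 == 1:
                return False
            i //= 3
        return True
    return ''.join('1' if no_one_digit(i) else '0' for i in range(3 ** k))

def has_evenly_spaced_triple(s):
    ones = [i for i, ch in enumerate(s) if ch == '1']
    return any(2 * b - a < len(s) and s[2 * b - a] == '1'
               for i, a in enumerate(ones) for b in ones[i + 1:])

s = cantor_like_string(4)                                 # n = 81
print(len(s), s.count('1'), has_evenly_spaced_triple(s))  # 81 16 False
```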
About the maximum number of 1s in a string of length n with no 3 evenly spaced ones (which we saw above was at least n^0.63 from the easy Cantor-like construction, and at least n^(1 - c/√(log n)) with Moser's construction) — this is OEIS A003002. It can also be calculated directly from OEIS A065825 as the k such that A065825(k) ≤ n < A065825(k+1). I wrote a program to find these, and it turns out that the greedy algorithm does not give the longest such string. For example, for n=9, we can get 5 1s (110100011) but the greedy gives only 4 (110110000), for n=26 we can get 11 1s (11001010001000010110001101) but the greedy gives only 8 (11011000011011000000000000), and for n=74 we can get 22 1s (11000010110001000001011010001000000000000000010001011010000010001101000011) but the greedy gives only 16 (11011000011011000000000000011011000011011000000000000000000000000000000000). They do agree at quite a few places until 50 (e.g. all of 38 to 50), though. As the OEIS references say, it seems that Jaroslaw Wroblewski is interested in this question, and he maintains a website on these non-averaging sets. The exact numbers are known only up to 194.
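For what it's worth, here is one way the greedy construction could be written. I'm assuming "greedy" means scanning left to right and keeping a 1 whenever it does not complete an evenly spaced triple with two 1s already kept; this reproduces the greedy strings quoted above.

```python
def greedy_no_ap_string(n):
    """Greedily place 1s left to right, skipping any position that would
    complete an evenly spaced triple with two already-placed 1s."""
    kept = set()
    for c in range(n):
        # c would be the last element of a triple (a, b, c) with b - a = c - b,
        # i.e. a = 2*b - c for some already-kept middle element b.
        if not any(2 * b - c in kept for b in kept):
            kept.add(c)
    return ''.join('1' if i in kept else '0' for i in range(n))

print(greedy_no_ap_string(9))              # 110110000, 4 ones, as quoted above
print(greedy_no_ap_string(26).count('1'))  # 8, vs. the optimal 11
```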
Your problem is called AVERAGE in this paper (1999):
A problem is 3SUM-hard if there is a sub-quadratic reduction from the problem 3SUM: Given a set A of n integers, are there elements a,b,c in A such that a+b+c = 0? It is not known whether AVERAGE is 3SUM-hard. However, there is a simple linear-time reduction from AVERAGE to 3SUM, whose description we omit.
Wikipedia:
When the integers are in the range [−u ... u], 3SUM can be solved in time O(n + u lg u) by representing S as a bit vector and performing a convolution using FFT.
This is enough to solve your problem :).
What is very important is that O(n log n) is the complexity in terms of the number of zeroes and ones, not in terms of the count of ones (which could be given as an array, like [1,5,9,15]). Checking whether a set has an arithmetic progression, in terms of the number of 1s, is hard, and according to that paper, as of 1999 no algorithm faster than O(n^2) was known, and it is conjectured that none exists. Everybody who doesn't take this into account is attempting to solve an open problem.
Other interesting info, mostly irrelevant:
Lower bound:
An easy lower bound is a Cantor-like set (the numbers up to 3^k - 1 whose ternary expansion contains no digit 1): as a subset of a range of length n = 3^k it has n^(log_3 2) ≈ n^0.631 elements. So just checking that the set isn't too large, and then checking all pairs, is not enough to get O(n log n). You have to investigate the sequence more cleverly. A better lower bound is quoted here - it's n^(1 - c/(log n)^(1/2)). This means the Cantor set is not optimal.
Upper bound - my old algorithm:
It is known that for large n, a subset of {1, 2, ..., n} not containing a 3-term arithmetic progression has at most n/(log n)^(1/20) elements. The paper On triples in arithmetic progression proves more: the set cannot contain more than n * 2^28 * (log log n / log n)^(1/2) elements. So you could check whether that bound is exceeded, and if not, naively check pairs. This is an O(n^2 * log log n / log n) algorithm, faster than O(n^2). Unfortunately "On triples..." is on Springer - but the first page is available, and Ben Green's exposition is available here, page 28, theorem 24.
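Here is a sketch of what that check-then-brute-force decision procedure could look like. This is my own illustration, not code from the paper; the constant in the bound is left as a parameter, since only the asymptotics matter (with a constant that large, the first branch only ever fires for astronomically long strings).

```python
import math

def decide_has_triple(s, c=2 ** 28):
    """Decide whether s contains three evenly spaced 1s, using the bound above."""
    n = len(s)
    ones = [i for i, ch in enumerate(s) if ch == '1']
    m = len(ones)
    # If there are more 1s than any progression-free set can have, a triple
    # must exist (no witness is produced in this branch).
    if n > 16 and m > c * n * math.sqrt(math.log(math.log(n)) / math.log(n)):
        return True
    # Otherwise there are few enough 1s that checking all pairs stays within
    # O(n^2 * log log n / log n) overall.
    return any(2 * b - a < n and s[2 * b - a] == '1'
               for i, a in enumerate(ones) for b in ones[i + 1:])
```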
By the way, the papers are from 1999 - the same year as the first one I mentioned, so that's probably why the first one doesn't mention that result.