Searching for a point on the real line

This is a partial answer. I've found a recurrence relation that calculates future moves from past moves, but I have not yet been able to find the optimal length of the first move. I can say that the first move covers a distance of at most $\frac{1}{pdf(0)} \approx 2.50663\sigma$ before switching direction.


We can represent the movement sequence using a sequence of numbers like $[1,0.5,2,4,9,...]$ which lists the alternating left and right "switchback" points visited in the search. This example sequence represents moving from the initial $x=0$ to $x=1$, to $x=-0.5$, to $x=2$, to $x=-4$, etc. I'll refer to this sequence as $s[n]$, and I'm leaving it unsigned for convenience. Every other item (that is, the entries for each side taken separately) must form a non-decreasing sequence.
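For concreteness, here is a tiny sketch (the helper name is my own) that maps the unsigned sequence onto the signed positions actually visited, assuming the first move goes to the right:

def signed_positions(s):
    # odd-indexed entries are visits to the left of the origin
    return [x if i % 2 == 0 else -x for i, x in enumerate(s)]

print(signed_positions([1, 0.5, 2, 4, 9]))   # [1, -0.5, 2, -4, 9]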

The core idea of my approach is to realize that the expected time of the search is just a function of this infinite list. A necessary condition for a minimum of this function is that its derivative with respect to every value in the list be $0$. So the question becomes: how can we find the derivative of the expected time with respect to the movement pattern?

Note: To be thorough, we can rule out two degenerate cases in an optimal list: no item can sit at its minimum possible value (since this results in extra walking without covering new territory), and no item can be infinite (there are two sides of the real line to cover, so you must never stop alternating sides). Furthermore, this approach could reveal multiple sequences that satisfy the constraint, only one of which is actually the global minimum. Luckily, we later find only one sequence.

From this move sequence, we can see that the probability distribution is covered in "blocks" (periods of time where new ground is covered) interwoven with "gaps" (in which we are covering duplicate ground). The length of a given gap equals the furthest right we've been plus the furthest left we've been. Each block has a certain "starting time", which is the sum of all previous block and gap lengths.

Example: for the sequence $[1,2,3,4,...]$ we have:

  • the block $[0,1]$ at time 0 of length 1
  • the gap $[1,0]$ at time 1 of length 1
  • the block $[0,-2]$ at time 2 of length 2
  • the gap $[-2,1]$ at time 4 of length 3
  • the block $[1,3]$ at time 7 of length 2
  • the gap $[3,-2]$ at time 9 of length 5
  • etc
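This bookkeeping is mechanical, so here is a small sketch (the function name and structure are my own; it assumes the first move goes right) that reproduces the list above for $[1,2,3,4]$:

def blocks_and_gaps(s):
    # s: unsigned switchback magnitudes; assumes the first move goes right.
    events = []
    t = 0.0
    hi = lo = 0.0   # furthest right / furthest left reached so far (lo is unsigned)
    for i, x in enumerate(s):
        gap = hi + lo   # duplicate ground: from one extreme back across to the other
        if gap > 0:
            events.append(("gap", t, gap))
            t += gap
        prev = hi if i % 2 == 0 else lo   # previous extreme on this side
        events.append(("block", t, x - prev))
        t += x - prev
        if i % 2 == 0:
            hi = x
        else:
            lo = x
    return events

for kind, start, length in blocks_and_gaps([1, 2, 3, 4]):
    print(f"{kind} at time {start:g} of length {length:g}")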

As you can see, the gap length after the block given by $s[n]$ is equal to $s[n] + s[n-1]$. Also, the gap length after a given block is the block length plus the prior gap length. (By keeping the sequence unsigned, we can leave out a bunch of absolute value signs.)

Let's say that we want to take the derivative with respect to $s[n]$. What we can do is see how much increasing the value of $s[n]$ by $\Delta x$ changes the expected time:

  • This has the effect of increasing the length of the current block by $\Delta x$ and decreasing the length of the (n+2)th block by $\Delta x$; all other block lengths are unaffected.
  • The lengths of the next two gaps each increase by $\Delta x$, but no other gap lengths change.

Key insight: If you have a block, and you increase its starting time by a certain amount, without changing the "range" covered by the block, then the average expected time of the sequence increases by (amount shifted)*(probability mass of the block). You are taking some fraction of possible object locations and pushing that fraction to be later in time by a constant amount.
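In symbols (the notation here is mine): if a block covering a set $B$ has its start time shifted later by $\Delta t$, while $B$ itself is unchanged, then

$$\Delta E = \Delta t \cdot \int_B pdf(x)\,dx$$

where $E$ is the expected search time.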

The below image illustrates what happens when you increase the value of one number in the sequence by $\Delta x$:

[image: how the blocks and gaps shift when $s[n]$ is increased by $\Delta x$]

  • you have a small sliver of width $\Delta x$ with a probability mass of $pdf(s[n]) * \Delta x$, where $pdf(s[n])$ is the value of the PDF at that location. Importantly, since the normal distribution is symmetrical, the fact that our sequence is unsigned doesn't affect $pdf(s[n])$. This sliver moves earlier in time by a distance equal to the length of the first gap, the next block, and the second gap: $(s[n]+s[n-1]) + (s[n+1]-s[n-1]) + (s[n+1]+s[n]) = 2*(s[n+1]+s[n])$, i.e. twice the second gap. The overall benefit is thus $$pdf(s[n])*\Delta x*2*(s[n+1]+s[n])$$

  • All the remaining blocks (which together contain all the probability mass not yet covered by the search) are pushed back in time by $2*\Delta x$. The overall penalty is thus $$2*\Delta x*rem$$ where $rem$ is the remaining probability mass.

Taken all together, this means that our overall derivative is

$$\lim\limits_{\Delta x \to 0} \frac{penalty - benefit}{\Delta x}$$

$$ = \lim\limits_{\Delta x \to 0} \frac{2*\Delta x*rem - pdf(s[n])*2*\Delta x*(s[n+1]+s[n])}{\Delta x}$$

$$= \lim\limits_{\Delta x \to 0} 2*rem - pdf(s[n])*2*(s[n+1]+s[n])$$

$$= 2*rem - 2*pdf(s[n])*(s[n+1]+s[n])$$
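As a sanity check on this formula (this verification code is my own, not part of the derivation; it takes $\sigma=1$, and the grid width, test sequence, and step size are arbitrary choices), we can compute the expected time by quadrature and compare a central finite difference in one $s[n]$ against the formula:

import numpy as np
from math import erf, exp, pi, sqrt

def first_hit_times(seq, xs):
    # Time (= distance walked) at which each point in xs is first reached,
    # following the unsigned switchback sequence seq (first move to the right).
    ts = np.full(xs.shape, np.inf)
    dist, cur = 0.0, 0.0
    for i, m in enumerate(seq):
        target = m if i % 2 == 0 else -m
        leg_lo, leg_hi = min(cur, target), max(cur, target)
        new = np.isinf(ts) & (xs >= leg_lo) & (xs <= leg_hi)
        ts[new] = dist + np.abs(xs[new] - cur)
        dist += abs(target - cur)
        cur = target
    return ts

def expected_time(seq, span=8.0, num=400001):
    # Quadrature for the expected time with sigma = 1; the grid must lie
    # inside the region the sequence eventually covers.
    xs = np.linspace(-span, span, num)
    pdf = np.exp(-xs ** 2 / 2) / sqrt(2 * pi)
    return float(np.sum(first_hit_times(seq, xs) * pdf) * (xs[1] - xs[0]))

def phi(x):   # normal pdf, sigma = 1
    return exp(-x * x / 2) / sqrt(2 * pi)

def Phi(x):   # normal cdf, sigma = 1
    return 0.5 * (1 + erf(x / sqrt(2)))

seq = [1.0, 2.0, 3.0, 4.0, 9.0, 9.0]   # covers [-9, 9], well past the grid
n, h = 2, 0.01                         # perturb s[2] = 3.0
up = seq[:n] + [seq[n] + h] + seq[n + 1:]
dn = seq[:n] + [seq[n] - h] + seq[n + 1:]
finite_diff = (expected_time(up) - expected_time(dn)) / (2 * h)

rem = 1 - Phi(seq[n]) + Phi(-seq[n - 1])
formula = 2 * rem - 2 * phi(seq[n]) * (seq[n + 1] + seq[n])
print(finite_diff, formula)   # these should agree (about -0.0139 here)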

Applying the restriction that this derivative equals zero:

$$0 = 2*rem - pdf(s[n])*2*(s[n+1]+s[n])$$

$$0 = rem - pdf(s[n])*(s[n+1]+s[n])$$

Then we can solve this recurrence relation for $s[n+1]$:

$$s[n+1] = \frac{rem}{pdf(s[n])} - s[n]$$

The tricky part is figuring out the remaining probability mass from the sequence. Just after the block ending at $s[n]$, the search has covered everything between the two extremes $s[n]$ and $s[n-1]$ (one on each side); since the distribution is symmetrical, it doesn't matter which side is which, and we have

$$rem = 1 - cdf(s[n]) + cdf(-s[n-1])$$

Now it's time to start plugging in the actual formulas for the normal distribution.

$$pdf(s[n]) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{s[n]^2}{2\sigma^2}}$$

$$cdf(s[n]) = \frac{1}{2}\left[1+Erf\left(\frac{s[n]}{\sigma\sqrt{2}}\right)\right]$$

Thanks to Mathematica, we can substitute and simplify all of this. I'm not sure if this is useful, but here's the expression when $\sigma = \frac{1}{\sqrt{2}}$, because this choice of $\sigma$ really helps simplify a bunch of fractions.

$$s[n+1] = e^{s[n]^2}\sqrt{\pi}\left[1-\frac{1}{2}\left(Erf(s[n-1]) + Erf(s[n])\right)\right] - s[n]$$

(Thus, to get the sequence in terms of $\sigma$, multiply all terms by $\sqrt{2}$. I hope this is not too confusing.)
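Here is a short stdlib-only sketch of this recurrence (the function name, the term cap, and the starting convention $s = [0, a]$ are my own choices; the cap stops iterating once the remaining mass is negligible, before $e^{s[n]^2}$ overflows a double). It should reproduce the sequences quoted in the numerical section below:

import math

SQRT2 = math.sqrt(2.0)

def build_sequence(a_sigma, max_terms=6, cap=26.0):
    # Switchback magnitudes in units of sigma, generated from a first move of
    # a_sigma via the sigma = 1/sqrt(2) recurrence above (internal values are
    # rescaled by sqrt(2) on the way out).
    s = [0.0, a_sigma / SQRT2]   # recurrence units; the leading 0 plays s[n-1]
    while len(s) < max_terms + 1 and s[-1] < cap:
        p, c = s[-2], s[-1]
        rem = 1.0 - 0.5 * (math.erf(p) + math.erf(c))
        s.append(math.exp(c * c) * math.sqrt(math.pi) * rem - c)
    return [x * SQRT2 for x in s[1:]]

print(build_sequence(math.sqrt(math.pi)))   # ~[1.77245, 4.71672, 6476.76]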

Now we have our recurrence relation. The next question is what do we start it with? One key insight here is that the sequence $[a,b,c,\dots]$ is the same as $[0,0,a,b,c,\dots]$ because the addition of zero-length right and left moves at the start has zero effect on the actual movement pattern. Both sequences must be minima.

If the derivative with respect to the second 0 were negative, then you could save expected time by increasing that value, effectively inserting a new move (to the left) before the current initial move (to the right), which contradicts the assertion that starting with move $a$ was optimal. But there is no corresponding restriction saying the derivative cannot be positive. So technically we only have the restriction

$$0 \le rem - pdf(s[n])*(s[n+1]+s[n])$$

Here $s[n]=0$, $s[n+1]=a$, and $rem=1$ (nothing has been covered yet), so

$$0 \le 1 - pdf(0)*a$$

$$a \le \frac{1}{pdf(0)}$$
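For the normal distribution, $pdf(0) = \frac{1}{\sigma\sqrt{2\pi}}$, so this bound evaluates to

$$a \le \sigma\sqrt{2\pi} \approx 2.50663\,\sigma,$$

which is the figure quoted at the top.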

Let's view the entire sequence as defined by the choice for $a$, where $a$ must be less than or equal to $\frac{1}{pdf(0)}$. If we view the expected time as a function of $a$ then either the expected time is minimized when $a = \frac{1}{pdf(0)}$ (with an equivalent minimum achieved when $a=0$), or there is a minimum located somewhere within the possible interval for $a$.

Where this minimum lies is something I haven't been able to find analytically for the normal distribution.


Numerical simulations

Here's some code:

import numpy as np

def calcdist(tg, ls=(1.77245, -4.71672, 6476.76, -1000000)):
    # Distance walked until the target tg is first reached, following the
    # signed switchback points in ls. The huge final entries act as
    # sentinels so that every sampled target is eventually covered.
    dist = 0
    cur = 0
    for pa in ls:
        if min(cur, pa) <= tg <= max(cur, pa):
            return dist + abs(tg - cur)
        dist += abs(pa - cur)
        cur = pa

results = [calcdist(np.random.normal()) for _ in range(10000000)]
samplemean = np.mean(results)
# half-width of a 95% confidence interval for the sample mean
samplestderr = 1.96 * np.sqrt(np.var(results)) / np.sqrt(10000000)

If you let $a = 1/pdf(0)$ then you get the sequence $[2.50663\sigma, 26.8494\sigma, 5.3\times10^{154}\sigma, \dots]$, which has an expected time of $3.663\sigma \pm 0.003\sigma$. If you increase or decrease the first term in the sequence independently of the rest, the expected time goes up.

If you let $a = \sqrt{\pi}\sigma$ then you get the sequence $[1.77245\sigma, 4.71672\sigma, 6476.76\sigma, \dots]$, which seems to have a lower expected time at $3.09\sigma \pm 0.01\sigma$. I just picked this because it was a pretty number.
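To push the numerics further, one could scan over $a$ directly. Here is a hedged sketch along those lines (the grid bounds, step size, and sample count are arbitrary choices of mine); it reuses calcdist, signed_positions, and build_sequence from above to estimate the expected time as a function of $a$, which should let the minimizing $a$ be bracketed numerically:

import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=1000000)   # sigma = 1; fewer samples than above,
                                     # so the noise is roughly 0.01 sigma

for a in np.arange(1.5, 2.51, 0.25):
    seq = build_sequence(a) + [1e6]  # sentinel so calcdist always returns
    ls = signed_positions(seq)
    est = np.mean([calcdist(x, ls) for x in samples])
    print(f"a = {a:.2f} sigma -> estimated E[T] = {est:.3f} sigma")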