The "find my car" problem: proper interpretation and solution?
Solution 1:
Any valid search scheme is characterized by an increasing sequence of time intervals $\tau_1<\tau_2<\tau_3<\cdots$ such that for time $\tau_1$ you walk left, then for time $\tau_2$ you walk right, then for time $\tau_3$ you walk left again, and so on. Then your turning points are $$\begin{align} t_1&=\tau_1 & x_1&=-\tau_1 \\ t_2&=\tau_1+\tau_2 & x_2&=-\tau_1+\tau_2\\ t_3&=\tau_1+\tau_2+\tau_3 & x_3&=-\tau_1+\tau_2-\tau_3\\ &\vdots & &\vdots \\ t_n&=\sum_{i=1}^n\tau_i & x_n&=\sum_{i=1}^n(-1)^i\tau_i \end{align}$$ and we have $$f(t)=x_n+(-1)^{n+1}(t-t_n)\qquad\text{when $t_n\le t\le t_{n+1}$}.$$ Conversely, the first time the point at location $x$ is passed is $$g(x)=t_n+(-1)^{n+1}(x-x_n)\qquad\text{when $x_n\le x\le x_{n+1}$ or $x_n\ge x\ge x_{n+1}$}.$$
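As a sanity check, this parametrization is easy to code up; here is a minimal sketch (function names are mine) that builds the turning points from the leg durations and evaluates $f(t)$:

```python
from itertools import accumulate

def turning_points(taus):
    """Turning times t_n and positions x_n from leg durations tau_1..tau_N."""
    ts = list(accumulate(taus))  # t_n = tau_1 + ... + tau_n
    xs = list(accumulate((-1) ** (i + 1) * tau for i, tau in enumerate(taus)))
    return [0.0] + ts, [0.0] + xs  # prepend t_0 = 0, x_0 = 0

def f(t, taus):
    """Position at time t: unit speed, linear between turning points."""
    ts, xs = turning_points(taus)
    n = max(i for i, tn in enumerate(ts) if tn <= t)  # last turn before t
    return xs[n] + (-1) ** (n + 1) * (t - ts[n])

# Example: walk left for time 1, then right for time 3.
print(f(0.5, [1, 3]))  # -0.5 (still on the first, leftward leg)
print(f(2.0, [1, 3]))  #  0.0 (back at the start on the second leg)
```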
Let $\underline f(t)=\min_{s\le t}f(s)$ and $\overline f(t)=\max_{s\le t}f(s)$. To maximize $S_f(t)$ for a fixed $t$, it only makes sense to turn around at most once, because any $[\underline f(t),\overline f(t)]$ achieved by turning around multiple times can be achieved in less time by turning around only once. So we may take $\tau_1\le t/2$ (because turning around buys you nothing if $\tau_1>t/2$), and $\tau_{n>1}=\infty$. Then we have $\underline f(t)=-\tau_1$ and $\overline f(t)=t-2\tau_1$, and the task is simply to maximize the probability mass lying in the interval $[-\tau_1,t-2\tau_1]$. For the standard normal distribution, this gives $$S_f(t) = \frac12\operatorname{erf}\left(\frac{\tau_1}{\sqrt2}\right)+\frac12\operatorname{erf}\left(\frac{t-2\tau_1}{\sqrt2}\right)$$ which is maximized when $$\tau_1=\max\left(\frac23t - \frac13\sqrt{t^2+6 \log 2},0\right).$$ Asymptotically, as $t\to\infty$ we get $\tau_1\to t/3$: you should walk in one direction for time $t/3$, then turn around and walk the other way for the rest of the time, so that you cover a symmetrical range. On the other hand, for $t\le\sqrt{2\log 2}$ we have $\tau_1=0$: turning around is not worth the wasted time.
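For concreteness, the optimal $\tau_1$ and the resulting success probability are a few lines of Python (function names are mine; standard normal assumed as above):

```python
from math import erf, sqrt, log

def tau1_opt(t):
    """Optimal first-leg duration for the fixed-deadline problem."""
    return max(2 * t / 3 - sqrt(t * t + 6 * log(2)) / 3, 0.0)

def S(t, tau1):
    """P(car lies in [-tau1, t - 2*tau1]) for a standard normal prior."""
    return 0.5 * erf(tau1 / sqrt(2)) + 0.5 * erf((t - 2 * tau1) / sqrt(2))

t = 6.0
tau = tau1_opt(t)
print(tau, S(t, tau))  # tau approaches t/3 as t grows
print(tau1_opt(1.0))   # 0.0: for t below sqrt(2 log 2), never turn around
```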
Come to think of it, this should also solve question (3), because a solution that maximizes $p$ subject to $t\le t_\max$ also minimizes $t$ subject to $p\ge p^*$, where $p^*$ is the optimum value of the former problem. So you just have to substitute the optimum $\tau_1$ into the expression for $S_f(t)$ to get $S^*(t)$, the highest possible probability of finding the car within time $t$. And then solve $S^*(t)=p$. It doesn't seem to yield an elementary solution, though.
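Numerically, though, solving $S^*(t)=p$ is easy, since $S^*$ increases from $0$ to $1$; a bisection sketch under the same standard-normal assumption (function names are mine):

```python
from math import erf, sqrt, log

def S_star(t):
    """Best achievable probability of finding the car by deadline t."""
    tau1 = max(2 * t / 3 - sqrt(t * t + 6 * log(2)) / 3, 0.0)
    return 0.5 * erf(tau1 / sqrt(2)) + 0.5 * erf((t - 2 * tau1) / sqrt(2))

def t_min(p, lo=0.0, hi=100.0, iters=100):
    """Smallest t with S_star(t) >= p, by bisection (S_star is increasing)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if S_star(mid) < p else (lo, mid)
    return hi

print(t_min(0.9))  # time needed to find the car with probability 0.9
```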
Question (2) is a fair bit trickier. Here is a space-time diagram of the situation, with $x$ on the horizontal axis and $t$ on the vertical:
The thin line is $x=f(t)$, and the thick line is $t=g(x)$. We seek to minimize $T_f = \mathbb E[g(x)]$, the expected first-passage time when the car's position $x$ is normally distributed. I will not attempt to write down an explicit expression for this quantity. Instead, let us consider its variation under infinitesimal changes of the $\tau_n$. At the optimum, every such variation should vanish.
Suppose you increase both $\tau_n$ and $\tau_{n+1}$ by the same amount $\delta t$. For a small $\delta t$-length interval of points near $x_n$, this decreases $g(x)$ from about $t_n+2\tau_{n+1}$ to about $t_n$. However, for all the points visited afterwards, $g(x)$ increases by $2\delta t$. These are shown in blue and red below for $n=2$.
Let $\phi(x)$ denote the probability density at $x$, and $\Phi(a,b)$ the probability mass lying between $a$ and $b$. The probability of the blue region is $\phi(x_n)\delta t$, while that of the red is $1 - \Phi(x_{n-1}, x_n)$. Therefore, the change in $\mathbb E[g]$ is $$\phi(x_n)\delta t\cdot(-2\tau_{n+1}) + \big(1-\Phi(x_{n-1},x_n)\big)\cdot2\delta t.$$ Looking at the diagram again, what we have just computed is $\delta t$ times the partial derivative of $\mathbb E[g]$ with respect to $x_n$, holding the rest of the $x_m$'s constant. At the optimum all the first derivatives should be zero, so $$\phi(x_n)\tau_{n+1} = 1 - \Phi(x_{n-1},x_n)$$ for all $n\ge 1$. It appears that the natural parametrization of the problem is in terms of $x_n$, so we substitute $\tau_{n+1}=|x_{n+1}-x_n|$ and get a nice sequence of equations each involving only three variables (with $x_0=0$, the starting point): $$\begin{align} |x_2-x_1|\phi(x_1) &= 1 - \Phi(x_0,x_1), \\ |x_3-x_2|\phi(x_2) &= 1 - \Phi(x_1,x_2), \\ |x_4-x_3|\phi(x_3) &= 1 - \Phi(x_2,x_3), \\ &\vdots \end{align}$$
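Note that solved for $x_{n+1}$, each equation is a forward recurrence: given $x_{n-1}$ and $x_n$, the next turning point lies a distance $(1-\Phi(x_{n-1},x_n))/\phi(x_n)$ beyond $x_n$, on the opposite side. A sketch for the standard normal (names are mine):

```python
from math import erf, exp, pi, sqrt, copysign

def phi(x):
    """Standard normal density."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def mass(a, b):
    """Probability mass between a and b, in either order."""
    return abs(erf(b / sqrt(2)) - erf(a / sqrt(2))) / 2

def next_x(x_prev, x_cur):
    """x_{n+1} from the condition |x_{n+1} - x_n| phi(x_n) = 1 - Phi(x_{n-1}, x_n)."""
    step = (1 - mass(x_prev, x_cur)) / phi(x_cur)
    return x_cur + copysign(step, x_prev - x_cur)  # turn back the other way

# Starting from x_0 = 0, a choice of x_1 determines the whole sequence:
print(next_x(0.0, -1.44085))  # roughly 2.6276
```

So in principle the system is a shooting problem in the single unknown $x_1$; a bad choice of $x_1$ makes the generated sequence misbehave.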
Now all we have to do is solve this infinite system of nonlinear equations. But I don't know how to do that.
Well, here's an idea: take the first $n$ equations, eliminate $x_{n+1}$ by assuming $|x_{n+1}|-|x_n|=|x_n|-|x_{n-1}|$, and solve the system numerically.
Seems to work pretty well. For $n=8$, we get $$\begin{align} x_1&=-1.44085,&x_2&=2.62758,&x_3&=-3.6322,&x_4&=4.52034,\\ x_5&=-5.32668,&x_6&=6.07158,&x_7&=-6.76811,&x_8&=7.42555. \end{align}$$
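One can at least check that these numbers satisfy the optimality conditions: plugging them into $|x_{n+1}-x_n|\,\phi(x_n)=1-\Phi(x_{n-1},x_n)$ with $x_0=0$ gives residuals near zero for $n=1,\dots,7$ (the $n=8$ equation involves the eliminated $x_9$). A quick check, assuming the standard normal:

```python
from math import erf, exp, pi, sqrt

xs = [0.0, -1.44085, 2.62758, -3.6322, 4.52034,
      -5.32668, 6.07158, -6.76811, 7.42555]

phi = lambda x: exp(-x * x / 2) / sqrt(2 * pi)       # standard normal density
mass = lambda a, b: abs(erf(b / sqrt(2)) - erf(a / sqrt(2))) / 2

# Residual of the n-th optimality condition, n = 1..7.
for n in range(1, 8):
    r = abs(xs[n + 1] - xs[n]) * phi(xs[n]) - (1 - mass(xs[n - 1], xs[n]))
    print(n, r)  # all residuals are small
```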
I'm not sure this is really a rigorous technique, and I certainly haven't proved convergence or anything like that. But this is good enough for me.