Pick the closest number game
Solution 1:
The first part. Let the three players choose $x,y,z$ in order. Suppose that $x=0$ and consider the third player's choice.
Option A: Choose $z>y$. Then the optimal choice is arbitrarily close to $y$ to make the winning probability arbitrarily close to $1-y$. In this case, the winning probability for the second player is arbitrarily close to (and a little larger than) $y/2$.
Option B: Choose $z<y$. Here the third player's winning probability is $y/2$ no matter where $z$ lies in $(0,y)$, so we may treat her choice as a uniformly random element of $(0,y)$. In this case, the winning probability for the second player is $(1-y)+\frac{y}{4}=1-\frac{3y}{4}$. [This is (the probability that the random target is $\geq y$) $+$ (the probability that the target is nearer to $y$ than to $z$, conditioned on it being less than $y$) $\times$ (the probability that the target is less than $y$); with $z$ and the target independently uniform on $(0,y)$, the target exceeds the midpoint $(z+y)/2$ with probability $1/4$.]
The third player will choose A if $1-y>y/2$, i.e. if $y<2/3$ and will choose B if $y \geq 2/3$.
Whether the third player is made to choose option A or B, the second player's payoff (just over $y/2$ under A, which requires $y<2/3$; $1-3y/4$ under B, which requires $y\geq 2/3$) is largest at the switching point, so the optimal choice of $y$ is $2/3$, giving the second player a winning probability of $1/2$.
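As a quick sanity check on the option-B formulas above, here is a small Monte Carlo sketch (Python; the function name and trial count are arbitrary) with $x=0$, the second player at $y$, and $z$ uniform on $(0,y)$:

```python
import random

def option_b_payoffs(y, trials=200_000):
    """Simulate x = 0, player 2 at y, player 3 uniform on (0, y);
    the target is uniform on (0, 1) and the closest pick wins."""
    wins = [0, 0, 0]
    for _ in range(trials):
        picks = [0.0, y, random.uniform(0, y)]
        target = random.random()
        winner = min(range(3), key=lambda i: abs(picks[i] - target))
        wins[winner] += 1
    return [w / trials for w in wins]

# Expected (y/4, 1 - 3y/4, y/2); at y = 2/3 this is roughly (1/6, 1/2, 1/3).
print(option_b_payoffs(2/3))
```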
The second part. This is an intuitive rather than analytic solution. After the first two players have made their choices, the best probability the third player can secure is the largest of $x$, $(y-x)/2$ and $1-y$ (assuming $x<y$). Equating the three, we get $x=1/4$, $y=3/4$, so by symmetry both $1/4$ and $3/4$ should be optimal for the first player.
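A quick numerical check of that equalization (just plugging the claimed values into the three expressions):

```python
x, y = 0.25, 0.75
print(x, (y - x) / 2, 1 - y)   # 0.25 0.25 0.25 -- player 3's three options coincide
```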
Solution 2:
I thought about this some more and realized why the question is posed the way it is, in two steps. We can use the first case to solve the general case recursively.
As the existing answers have established, the answer to part $1$) is $2/3$, and the payoffs in this case are $(\frac16,\frac12,\frac13)$.
In the general case, by playing at $x_1$ Player $1$ effectively creates two new games on either side of $x_1$, one scaled down by $x_1$ and one scaled down by $1-x_1$, and these subgames work like the game in part $1$), since they're both delimited by a play by Player $1$ on one end and a normal boundary on the other.
Again, without loss of generality assume $x_1\le\frac12$. Either the other two players play in the same subgame, or in different subgames. If they play in the same subgame, it must be the larger one, of size $1-x_1$, since Player $3$ wouldn't play in the smaller subgame if Player $2$ has already played there. From $1)$ we know that in this case the payoffs are $(1-x_1)(\frac16,\frac12,\frac13)$, plus the $x_1$ that Player $1$ gets on the other side, for a total of $(5x_1+1)/6$ for Player $1$.
Player $3$ plays in the smaller subgame if Player $2$ has played in the larger one and the payoff $x_1-\epsilon$ in the smaller subgame is larger than the payoff $\frac13(1-x_1)$ in the larger subgame, and thus if $x_1\gt\frac14$. In this case, Player $2$ would play at $1-x_1+\delta$ to deter Player $3$ from playing at $x_2+\epsilon$, so the payoff for Player $1$ would be $\frac\epsilon2+\frac\delta2+\frac12((1-x_1)-x_1)=\frac\epsilon2+\frac\delta2+\frac12-x_1$.
Player $2$ plays in the smaller subgame if the payoff $x_1-\delta$ in the smaller subgame is larger than the payoff $\frac12(1-x_1)$ in the larger subgame, and thus if $x_1\gt\frac13$. Player $3$ would then play at $x_1+\epsilon$. This would leave Player $1$ with only $\frac\delta2+\frac\epsilon2$, so she'll avoid this outcome.
Since at $x_1=\frac14$ we already have $(5x_1+1)/6\gt\frac12-x_1$, Player $1$ plays at $x_1=\frac14$.
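To make the comparison explicit, here is a tiny Python sketch of Player $1$'s payoff as derived above ($(5x_1+1)/6$ for $x_1\le\frac14$, $\frac12-x_1$ beyond it, ignoring the $\epsilon$'s and $\delta$'s; the grid resolution is arbitrary):

```python
def player1_payoff(x1):
    """Player 1's payoff from the recursion above, for 0 <= x1 <= 1/2."""
    return (5 * x1 + 1) / 6 if x1 <= 0.25 else 0.5 - x1

grid = [i / 1000 for i in range(501)]
best_x1 = max(grid, key=player1_payoff)
print(best_x1, player1_payoff(best_x1))   # 0.25 0.375 -- maximum of 3/8 at x1 = 1/4
```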
Solution 3:
Let $x_i$ denote the number picked by Player $i$. Let $x_\lt$ and $x_\gt$ be the lesser and greater of $x_1$ and $x_2$. Also, I will use $\delta$ and $\epsilon$ to represent arbitrarily small displacements.
Player $3$ picks either $x_\lt-\epsilon$ or $x_\gt+\epsilon$ or any number (it doesn't matter which) in $(x_<,x_>)$. The payoffs for Player $3$ in these cases are $x_\lt-\epsilon$, $1-x_\gt-\epsilon$ and $(x_\gt-x_\lt)/2$, respectively, and she picks the greatest among these three. Note that in the third case, the half of the interval $[x_\lt,x_\gt]$ that Player $3$ doesn't win goes to Players $1$ and $2$ in equal parts (i.e. they each get one quarter of the interval), since in this case Player $3$ chooses uniformly randomly within that interval.
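This decision is easy to encode; here is a small helper (a sketch, with the $\epsilon$'s dropped and a made-up function name):

```python
def player3_choice(x_lo, x_hi):
    """Player 3's epsilon-free payoffs for her three options and the one she prefers:
    just below x_lo, just above x_hi, or anywhere strictly between the two."""
    options = {"below": x_lo, "above": 1 - x_hi, "between": (x_hi - x_lo) / 2}
    return max(options, key=options.get), options

print(player3_choice(0.2, 0.6))   # ('above', ...): playing just above x_hi pays best here
```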
We can assume without loss of generality that $x_1\le\frac12$.
Now assume first that Player $2$ picks a number above $x_1$. Then we have to distinguish two cases.
For small $x_1$, it will not pay for Player $3$ to use her first option. Then there is a boundary for $x_2$ at which Player $3$ switches from her second to her third option. This is the optimal move for Player $2$, since playing to either side of it would just cede territory. The condition for Player $3$ to be indifferent between these two options is $1-x_2=(x_2-x_1)/2$ and thus $x_2=(x_1+2)/3$. At this point $x_2-x_1=(2-2x_1)/3$. If Player $3$ goes for her second option, that interval is split evenly between Players $1$ and $2$, so the payoffs are $((2x_1+1)/3,(1-x_1)/3+\frac\epsilon2,(1-x_1)/3-\frac\epsilon2)$; whereas if Player $3$ goes for her third option, that interval is split in proportions $\frac14:\frac14:\frac24$ (see above), so the payoffs are $((5x_1+1)/6,(1-x_1)/2,(1-x_1)/3)$. At the equilibrium point, the $\epsilon$ difference favours Player $3$'s third option, where she doesn't lose $\frac\epsilon2$, and since this is favourable to Player $2$ (who gets $(1-x_1)/2$ instead of $(1-x_1)/3$), Player $2$ can play exactly at the equilibrium point and doesn't have to add a $\delta$ of his own to induce Player $3$ to choose her third option.
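A quick numerical cross-check of these third-option payoffs for a sample $x_1\le\frac14$ (plain Python, computing interval lengths directly; the value $x_1=0.2$ is arbitrary):

```python
x1 = 0.2                   # any x1 <= 1/4 works the same way
x2 = (x1 + 2) / 3          # player 2's indifference point derived above
gap = x2 - x1

p1 = x1 + gap / 4          # everything left of x1 plus a quarter of the gap
p2 = (1 - x2) + gap / 4    # everything right of x2 plus a quarter of the gap
p3 = gap / 2               # half of the gap

print(p1, p2, p3)                                        # direct length computation
print((5*x1 + 1) / 6, (1 - x1) / 2, (1 - x1) / 3)        # formulas above -- they agree
```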
For larger $x_1$, it will become profitable for Player $3$ to switch to her first option. The point of indifference for this switch is $(1-x_1)/3=x_1$, or $x_1=\frac14$. For $x_1\gt\frac14$, Player $2$ plays as closely as he can to Player $1$ while still forcing Player $3$ to use her first option. The point of indifference for this is $x_2=1-x_1$, but Player $2$ has to play at $x_2=1-x_1+\delta$ to make sure that Player $3$ uses her first and not her second option. The payoffs are then $((1-2x_1)/2+\frac\delta2+\frac\epsilon2,\frac12-\frac\delta2,x_1-\frac\epsilon2)$.
We've found that by picking a number above $x_1$, Player $2$ gets $\frac12-\frac\delta2$ if $x_1\gt\frac14$ and $(1-x_1)/2\ge\frac38$ if $x_1\le\frac14$. Since by playing below $x_1\le\frac12$ Player $2$ could never win more than $x_1$ (and Player $3$'s response would cut into even that), it will never pay for Player $2$ to play below $x_1$, and we've thus exhausted all cases.
To summarize: If Player $1$ plays at $x_1\le\frac14$, then Player $2$ plays at $x_2=(x_1+2)/3$, Player $3$ uses her third option, and the payoffs are $((5x_1+1)/6,(1-x_1)/2,(1-x_1)/3)$. If Player $1$ plays at $x_1\gt\frac14$, then Player $2$ plays at $x_2=1-x_1+\delta$, Player $3$ uses her first option, and the payoffs are $((1-2x_1)/2+\frac\delta2+\frac\epsilon2,\frac12-\frac\delta2,x_1-\frac\epsilon2)$.
In the first case, the payoff for Player $1$ increases with $x_1$, and in the second case it decreases with $x_1$, so the maximum is around $\frac14$. In the first case, this yields a payoff of $\frac38$ for Player $1$, whereas in the second case the payoff is not more than $\frac14$. Thus Player $1$ picks $\frac14$, Player $2$ picks $\frac34$, and Player $3$ uses her third option, with expected payoffs $(\frac38,\frac38,\frac14)$.
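Finally, a Monte Carlo sketch of the claimed equilibrium (Python, assuming as above that Player $3$ picks uniformly within $(\frac14,\frac34)$ when she uses her third option):

```python
import random

def equilibrium_payoffs(trials=500_000):
    """Player 1 at 1/4, Player 2 at 3/4, Player 3 uniform on (1/4, 3/4);
    the target is uniform on (0, 1) and the closest pick wins."""
    wins = [0, 0, 0]
    for _ in range(trials):
        picks = [0.25, 0.75, random.uniform(0.25, 0.75)]
        target = random.random()
        winner = min(range(3), key=lambda i: abs(picks[i] - target))
        wins[winner] += 1
    return [w / trials for w in wins]

# Expected roughly (3/8, 3/8, 1/4) = (0.375, 0.375, 0.25).
print(equilibrium_payoffs())
```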