How to minimise the cost of guessing a number in a high/low guess game?
In a high/low guessing game, the "true" number is one of $\{1,\cdots,1000\}$. For each guess you make, you are told whether your guess is $<$, $>$ or $=$ the true number, and the game ends when you guess the true number. Consider the following three scenarios:
If your guess is either lower or higher, you pay $\$1$ in both cases. If your guess is correct, you pay nothing and the game ends.
If your guess is higher, you pay $\$1$; if your guess is lower you pay $\$2$. If your guess is correct, you pay nothing and the game ends.
If your guess is higher, you pay $\$1$; if your guess is lower you pay $\$1.5$. If your guess is correct, you pay nothing and the game ends.
In each of these three scenarios, what is the minimum amount of $\$$ you must have to be sure of finding the true number?
Formally, define a space $\Sigma$ of all guessing strategies. We can then define the cost as a function $$C:\Sigma\times \{1,\cdots,1000\}\to[0,\infty),\quad (S,v)\mapsto C(S,v)$$ that maps the pair $(S,v)$ (where $v$ is the "true" number) to the total cost incurred when playing strategy $S$ against true number $v$. Our problem is then to find $$\min_{S} \max_{v} C(S,v).$$
Can this min-max problem be analytically solved? What would be the corresponding optimal strategy then?
For scenario 1, my intuition is to use bisection, which costs at most $\$10$ (since $2^{10}=1024\geqslant 1000$). But I cannot prove this is optimal. For the second scenario, since an under-guess costs more, perhaps a "right-skewed" bisection is better, i.e. always guessing at the $2/3$-quantile of the remaining interval. But this is just (rather wild) intuition, nothing close to a proof.
For this game, the most natural representation of a strategy is a rooted binary tree, with vertices labeled by guesses (i.e. numbers from $\{1, \ldots, N=1000\}$), and each vertex having at most two edges (connecting it to the guess to make next, for "lower" and "higher" cases).
Let the penalties you pay for "lower/higher" cases be $L$ and $H$, respectively. Now consider an arbitrary $N$, and consider an optimal tree $T(N)$ (that reaches the minimum worst-case total penalty). Its root is labeled with some guess $R$, $1\leqslant R\leqslant N$, its left subtree can be chosen to be [isomorphic to] $T(R-1)$, and its right subtree isomorphic to $T(N-R)$ (assuming $T(0)$ is empty).
So if $P(N,L,H)$ is the optimal penalty, then $$P(0,L,H)=P(1,L,H)=0,\\ P(N,L,H)=\min_{1\leqslant R\leqslant N}\max\begin{cases}L+P(N-R,L,H)\\ H+P(R-1,L,H)\end{cases}\quad(N>1)$$ which allows one to compute $$P(1000,1,1)=\color{red}{9},\quad P(1000,2,1)=14,\quad P(1000,1.5,1)=11.5$$ (note: $9$, not $10$ as you suggested, because the decision process is not plain bisection; each guess has the extra "$=$" outcome). The optimal trees can be computed along the way.
Regarding the asymptotic behavior (with $N\to\infty$), if we let $R(N,L,H)$ be the "argmin" (say, the smallest root of optimal trees), one can show that the limits $$\alpha=\alpha(L,H)=\lim_{N\to\infty}\frac{P(N,L,H)}{\ln N},\quad\beta=\beta(L,H)=\lim_{N\to\infty}\frac{R(N,L,H)}{N}$$ exist and satisfy $$H+\alpha\ln\beta=L+\alpha\ln(1-\beta)=0.$$ In particular, $\beta(2,1)=(\sqrt{5}-1)/2$, and $\beta(1.5,1)$ is the root of $\beta^3=(1-\beta)^2$.
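Eliminating $\alpha$ from the two conditions gives the single equation $\beta^{L/H}=1-\beta$, which bisection solves numerically (a quick sketch; the function name is my own):

```python
def beta(L, H, iters=100):
    # Solve b**(L/H) = 1 - b on (0, 1): the left side increases in b and
    # the right side decreases, so bisection applies.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid ** (L / H) < 1 - mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(beta(2, 1))    # -> 0.6180..., i.e. (sqrt(5) - 1) / 2
print(beta(1.5, 1))  # -> 0.5698..., the root of b**3 = (1 - b)**2
```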
For the (2,1) case, I found the worst-case cost increases by 1 at every Fibonacci number, so $P(F_n,2,1)=1+P(F_n-1,2,1)=1+P(F_{n-1},2,1)$. These key values $F_n$ increase by ratio $F_n/F_{n-1}\to\phi=(1+\sqrt{5})/2$.
These key values grow exponentially; equivalently, the worst-case cost $P(N,2,1)$ grows only logarithmically in $N$.
For $(1.5,1)$ the worst-case cost increases by $0.5$ at these values: $$P(N,1.5,1)>P(N-1,1.5,1)\quad\text{for}\quad N=2,3,4,5,7,9,12,16,21,28,37,49,65,86,114,\ldots$$
The recursion for this sequence is $a_{n+1}=a_{n-1}+a_{n-2}$. The ratio $a_{n+1}/a_n$ approaches $1.3247\ldots$, the real root of $x^3=x+1$, just as $\phi$ in the $(2,1)$ case is the root of $x^2=x+1$.
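A quick check of this recursion and its limiting ratio, seeding with the first three terms of the sequence above:

```python
a = [2, 3, 4]                # first three terms of the sequence
while len(a) < 30:
    a.append(a[-2] + a[-3])  # a_{n+1} = a_{n-1} + a_{n-2}

print(a[:15])  # [2, 3, 4, 5, 7, 9, 12, 16, 21, 28, 37, 49, 65, 86, 114]
print(a[-1] / a[-2])         # -> 1.3247..., the real root of x**3 = x + 1
```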