What is the expected value of this game for large $N$?

Letting $E_N$ be the optimal expected value for even $N$, we can show that $$ E_N\sim c\sqrt N $$ for $N$ large, where $c\approx 0.369136$.
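This asymptotic is easy to test directly, since $E_N$ can be computed exactly by backward induction over the deck. Here is a minimal Python sketch (mine, not from the question), assuming the standard formulation of the game: a shuffled deck of $N/2$ black and $N/2$ red cards, a running score of $+1$ for each black card drawn and $-1$ for each red, and the option to stop after any card (this matches the process $X^N$ defined below).

```python
import math

def game_value(N):
    """Exact E_N by backward induction. State (b, r) = black and red
    cards remaining; u[b][r] = optimal expected *future* gain, where
    each black card drawn scores +1, each red -1, and stopping now is
    worth 0. By translation invariance, E_N = u[N/2][N/2]."""
    half = N // 2
    u = [[0.0] * (half + 1) for _ in range(half + 1)]
    for b in range(half + 1):
        for r in range(half + 1):
            if b == r == 0:
                continue
            cont = 0.0
            if b > 0:  # draw a black card with probability b/(b+r)
                cont += b / (b + r) * (1.0 + u[b - 1][r])
            if r > 0:  # draw a red card with probability r/(b+r)
                cont += r / (b + r) * (-1.0 + u[b][r - 1])
            u[b][r] = max(0.0, cont)  # stop now (0) or keep drawing
    return u[half][half]

for N in (100, 400, 1600):
    E = game_value(N)
    print(f"N={N}: E_N={E:.4f}, E_N/sqrt(N)={E / math.sqrt(N):.4f}")
```

The ratio $E_N/\sqrt N$ should approach $c\approx0.3691$ as $N$ grows.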

I will give a rough argument, which should be possible to make rigorous by filling in the details. Define a process $X^N_t$ for $0\le t\le 1$ as follows. Let $X^N_{k/N}$ be the number of black cards minus the number of red cards after $k$ cards have been drawn, and take $X^N$ to be constant on each interval $[(k-1)/N,k/N)$. This is a random walk conditioned on $X^N_1=0$. Under the scaling $N^{-1/2}X^N_t$, it converges in distribution to a limit $X$, which is a standard Brownian bridge on the interval $[0,1]$ (i.e., a standard Brownian motion conditioned on $X_1=0$). So, commuting the limit with the supremum (this exchange is the step that would need rigorous justification), $$ N^{-1/2}E_N=\sup_\tau E[N^{-1/2}X^N_\tau]\to\sup_\tau E[X_\tau], $$ where the supremum is taken over stopping times $0\le\tau\le 1$. Hence $E_N\sim c\sqrt N$, where $c=\sup_\tau E[X_\tau]$ is the value of the continuous-time version of the game.
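As a sanity check on the convergence claim (again just a sketch of mine, not part of the argument), the scaled walk should exhibit the Brownian bridge's variance profile $\operatorname{Var}(X_t)=t(1-t)$:

```python
import random

# Monte Carlo check that the scaled walk N^{-1/2} X^N has the
# Brownian-bridge variance profile Var(X_t) = t(1-t). Since the walk
# has mean zero, E[X_t^2] estimates the variance.
N, trials = 500, 5000
checkpoints = [N // 4, N // 2, 3 * N // 4]
sq_sums = {k: 0.0 for k in checkpoints}
deck = [1] * (N // 2) + [-1] * (N // 2)  # +1 for black, -1 for red
for _ in range(trials):
    random.shuffle(deck)
    for k in checkpoints:
        x = sum(deck[:k])                 # X^N after k cards
        sq_sums[k] += (x / N ** 0.5) ** 2
for k in checkpoints:
    t = k / N
    print(f"t={t:.2f}: MC E[X_t^2]={sq_sums[k] / trials:.4f}, "
          f"t(1-t)={t * (1 - t):.4f}")
```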

To solve for $c$, let $f(t,x)$ be the value of the game started from $X_t=x$ at time $t$. That is, $$ f(t,x)=\sup_{\tau\ge t} E[X_\tau\mid X_t=x]. $$ By considering $\tau=t$ and $\tau=1$, we see that $f(t,x)\ge\max(x,0)$. We need to compute $c=f(0,0)$.

We can reduce the dimensionality of the problem: by a simple scaling symmetry, for each fixed time $t$, the process $Y_s=(1-t)^{-1/2}X_{t+s(1-t)}$ is again a Brownian motion conditioned on $Y_1=0$. So, $$ \begin{align} f(t,x)&=\sup_{\tau} E[X_{t+\tau(1-t)}\mid X_t=x]\\ &=\sup_\tau(1-t)^{1/2} E[Y_\tau\mid Y_0=(1-t)^{-1/2}x]\\ &=(1-t)^{1/2}g\big((1-t)^{-1/2}x\big), \end{align} $$ where I am setting $g(x)=f(0,x)$.

Next, the Brownian bridge satisfies the stochastic differential equation $$ dX_t = dW_t - (1-t)^{-1}X_t\,dt $$ for a standard Brownian motion $W$. On the domain where $f(t,x) > x$ (the continuation region, where it is optimal to keep going), $f(t,X_t)$ is a local martingale, so $f$ satisfies the Kolmogorov backward equation (PDE) $$ \frac{\partial}{\partial t}f+\frac12\frac{\partial^2}{\partial x^2}f-(1-t)^{-1}x\frac{\partial}{\partial x}f=0. $$ Substituting in the expression for $f$ in terms of $g$, with $y=(1-t)^{-1/2}x$, gives a linear ODE, $$ g^{\prime\prime}(y)-yg^{\prime}(y)-g(y)=0. $$ This has the general solution $$ g(y)=e^{y^2/2}\left(a+b\Phi(y)\right) $$ for constants $a,b$, where $\Phi(y)=(2\pi)^{-1/2}\int_{-\infty}^y e^{-s^2/2}\,ds$ is the standard normal cumulative distribution function (I write $\Phi$ rather than $N$ to avoid a clash with the number of cards).

As $g$ is increasing (the value increases with the starting point), it cannot blow up in the limit $y\to-\infty$, which forces $a=0$, so $g(y)=be^{y^2/2}\Phi(y)$ on the domain where $g(y) > y$. Plugging in $y=0$ gives $c=g(0)=b/2$, so $g(y)=2ce^{y^2/2}\Phi(y)$ where $g(y) > y$. As $g(y)$ cannot grow faster than linearly in $y$, there must be a $y_0 > 0$ for which $2ce^{y_0^2/2}\Phi(y_0)=y_0$, so that $g(y)=y$ for $y > y_0$; that is, it is optimal to stop as soon as $X_t\ge y_0\sqrt{1-t}$. For the two expressions to paste together at $y_0$ without $g$ dropping below the lower bound $\max(y,0)$, the curve $2ce^{y^2/2}\Phi(y)$ must touch the line $y$ tangentially, which gives $$ c=\sup_{y > 0}\frac{ye^{-y^2/2}}{2\Phi(y)}. $$ By differentiating, the function of $y$ on the right has a unique local maximum. I do not know whether there is a closed-form solution, but it is easy to evaluate numerically (I used Wolfram Alpha). Expressing it in terms of the error function, $$ c=\sup_{y > 0}\frac{ye^{-y^2/2}}{1+{\rm erf}(y/\sqrt2)}, $$ we get $c\approx0.369136$.
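The maximization is also easy to reproduce in a few lines; here is a stdlib-only Python sketch using golden-section search (my choice of method, relying on the uniqueness of the maximum noted above):

```python
import math

def f(y):
    """f(y) = y * exp(-y^2/2) / (2 * Phi(y)), Phi the standard normal CDF."""
    Phi = 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))
    return y * math.exp(-0.5 * y * y) / (2.0 * Phi)

# Golden-section search for the unique interior maximum on (0, 3).
lo, hi = 0.0, 3.0
invphi = (math.sqrt(5.0) - 1.0) / 2.0
while hi - lo > 1e-12:
    a = hi - invphi * (hi - lo)
    b = lo + invphi * (hi - lo)
    if f(a) > f(b):
        hi = b
    else:
        lo = a
y0 = 0.5 * (lo + hi)
print(f"y0 ~ {y0:.6f}, c ~ {f(y0):.6f}")  # y0 ~ 0.8399, c ~ 0.369136
```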

For comparison, I also computed the $\alpha$ in the linked paper as $\alpha\approx0.839923675692373$, so $c=(1-\alpha^2)(\pi/2)^{1/2}\approx 0.369136$, and we are in agreement. In fact, the two formulas are consistent: at the maximizer $y_0$ above, the first-order condition is $(1-y^2)\Phi(y)=y\,\phi(y)$ (with $\phi$ the standard normal density), and substituting this into $c=y_0e^{-y_0^2/2}/(2\Phi(y_0))$ gives exactly $c=(1-y_0^2)(\pi/2)^{1/2}$, so $\alpha$ evidently coincides with the stopping threshold $y_0$.
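The numerical agreement is a one-line check:

```python
import math

alpha = 0.839923675692373  # value quoted from the linked paper
print((1 - alpha ** 2) * math.sqrt(math.pi / 2))  # -> 0.369136...
```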