Why does a beta distribution between two uniform order statistics have the distribution it does?
Per the Wikipedia article on order statistics, the $k$-th order statistic of $n$ i.i.d. uniforms ($U_{(k)}$) is Beta distributed with parameters $k$ and $n-k+1$. And for $j<k$, the difference $U_{(k)}-U_{(j)}$ is also Beta distributed, with parameters $k-j$ and $n-(k-j)+1$.
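(Not part of the claim itself, but a quick Monte Carlo sanity check of both facts via their implied means; the values of $n$, $k$, $j$ and the seed below are arbitrary choices of mine.)

```python
import random

# With n = 10, k = 7, j = 3, the two Beta claims imply
#   E[U_(k)]         = k/(n+1)     = 7/11
#   E[U_(k) - U_(j)] = (k-j)/(n+1) = 4/11
random.seed(0)
n, k, j, trials = 10, 7, 3, 200_000
sum_k, sum_gap = 0.0, 0.0
for _ in range(trials):
    u = sorted(random.random() for _ in range(n))
    sum_k += u[k - 1]               # k-th order statistic (1-indexed)
    sum_gap += u[k - 1] - u[j - 1]
print(round(sum_k / trials, 2))     # ≈ 7/11 ≈ 0.64
print(round(sum_gap / trials, 2))   # ≈ 4/11 ≈ 0.36
```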
In the post here, I managed to prove the first fact by thinking of a coin that lands heads whenever a uniform falls to the left of the $k$-th order statistic and tails when it falls to the right of it. The position of the $k$-th order statistic then plays the role of the probability of heads for this coin ($p$). So it's as if we've seen $k-1$ heads in $n-1$ tosses. This makes it a Beta with parameters $a=k$ and $b=n-k+1$, as expected (since a Beta with parameters $a$ and $b$ is the distribution of $p$ when we've seen $a-1$ heads and $b-1$ tails).
So far so good, but this doesn't seem to work for the second observation (the distribution of $U_{(k)}-U_{(j)}$). Instead of looking to the left and right of the $k$-th order statistic, we can think of the interval between $U_{(j)}$ and $U_{(k)}$. If a random uniform falls inside this interval, we count it as heads, and as tails if it falls outside. We know that $k-j-1$ uniforms fell inside the interval (heads) among $n-2$ total, so the number of tails is $n-(k-j)-1$. The Beta distribution should therefore have parameters $k-j$ and $n-(k-j)$. But the Wikipedia article gives the parameters $k-j$ and $n-(k-j)+1$, so there is an off-by-one error here. Where did I go wrong in the second case?
In the first case, the "prior" distribution being updated is $\text{Uniform}(0,1)=\text{Beta}(1,1)$ for the location of a single point chosen uniformly at random in $(0,1)$. In the second case, however, the "missing $+1$" arises because the "prior" is now for the distance between two such points chosen independently, and that distance is distributed $\text{Beta}(1,2)$ (see footnote$^\dagger$).
First case. Given a single uniform r.v. $U$, $n-1$ further i.i.d. uniforms $X_1,...,X_{n-1}$ are placed, forming $n-1$ Bernoulli trials, success meaning that $X_i<U,$ and $Y$ is the number of successes: $$\begin{align}&U_{(k)}\sim(U\mid Y=k-1)\\[1ex] &\quad\quad\quad\quad \text{ where }\begin{cases}U\sim\text{Uniform}(0,1)=\text{Beta}(1,1)\\ (Y\mid U=u)\sim\text{Binomial}(n-1,u)\end{cases}\\[3ex] &\implies U_{(k)}\sim\text{Beta}(1+(k-1),1+(n-1)-(k-1))\\ &\quad\quad\quad\quad\quad\quad=\text{Beta}(k,n-k+1)\end{align}$$
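This conditioning argument can be checked by rejection sampling (an illustrative sketch of mine, with arbitrary $n$, $k$, and seed): draw $U$ uniform, place $n-1$ further uniforms, and keep $U$ only when exactly $k-1$ of them land below it. The kept values should follow $\text{Beta}(k,n-k+1)$, whose mean is $k/(n+1)$.

```python
import random

random.seed(1)
n, k, trials = 8, 3, 400_000
kept = []
for _ in range(trials):
    u = random.random()
    successes = sum(random.random() < u for _ in range(n - 1))
    if successes == k - 1:        # condition on Y = k - 1
        kept.append(u)
mean = sum(kept) / len(kept)
print(round(mean, 2))             # ≈ k/(n+1) = 3/9 ≈ 0.33
```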
Second case. Given two i.i.d. uniforms $U,U'$, $n-2$ further i.i.d. uniforms $X_1,...,X_{n-2}$ are placed, forming $n-2$ Bernoulli trials, success meaning that $\min(U,U')\le X_i\le \max(U,U'),$ and $Y$ is the number of successes: $$\begin{align}&U_{(k)}-U_{(j)}\sim(|U-U'|\,\mid Y=k-j-1)\\[1ex] &\quad\quad\quad\quad\quad\quad\quad \text{ where }\begin{cases}\text{i.i.d. }U,U'\sim\text{Uniform}(0,1),\\ \quad \implies |U-U'|\sim\text{Beta}(1,2)\\ (Y\mid |U-U'|=d)\sim\text{Binomial}(n-2,d)\end{cases}\\[3ex] &\implies U_{(k)}-U_{(j)}\sim\text{Beta}(1+(k-j-1),2+(n-2)-(k-j-1))\\ &\quad\quad\quad\quad\quad\quad\quad\quad=\text{Beta}(k-j,n-k+j+1)\end{align}$$
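The same rejection idea works for the second case (again my sketch, with arbitrary parameters): draw two uniforms, place $n-2$ more, and keep the gap $|U-U'|$ only when exactly $k-j-1$ of them fall inside it. The kept gaps should follow $\text{Beta}(k-j,\,n-k+j+1)$, whose mean is $(k-j)/(n+1)$.

```python
import random

random.seed(2)
n, k, j, trials = 9, 6, 2, 400_000
kept = []
for _ in range(trials):
    u, v = random.random(), random.random()
    lo, hi = min(u, v), max(u, v)
    inside = sum(lo <= random.random() <= hi for _ in range(n - 2))
    if inside == k - j - 1:       # condition on Y = k - j - 1
        kept.append(hi - lo)
mean = sum(kept) / len(kept)
print(round(mean, 2))             # ≈ (k-j)/(n+1) = 4/10 = 0.4
```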
In both cases, the conclusion is a consequence of the following fact relating the (conditional) beta and binomial distributions:
$$\left. \begin{array}{l} X\sim\text{Beta}(a,b)\\ (Y\mid X=x)\sim\text{Binomial}(n,x) \end{array} \right\} \implies (X\mid Y=y)\sim\text{Beta}(a+y,b+n-y)$$
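A minimal numerical illustration of this update rule (my addition; the parameters $a$, $b$, $n$, $y$ are chosen arbitrarily): sample $X\sim\text{Beta}(a,b)$, then $Y\mid X\sim\text{Binomial}(n,X)$, keep $X$ whenever $Y=y$, and compare the kept mean with that of $\text{Beta}(a+y,\,b+n-y)$.

```python
import random

random.seed(3)
a, b, n, y, trials = 2.0, 2.0, 6, 4, 300_000
kept = []
for _ in range(trials):
    x = random.betavariate(a, b)
    successes = sum(random.random() < x for _ in range(n))
    if successes == y:            # condition on Y = y
        kept.append(x)
mean = sum(kept) / len(kept)
print(round(mean, 2))             # ≈ (a+y)/(a+b+n) = 6/10 = 0.6
```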
$^\dagger$ Note that $\text{Beta}(1,2)$ is a right-triangular distribution with p.d.f. $2(1-t)$ and c.d.f. $1-(1-t)^2$ for $0<t<1.$ That this is the distribution of $|U-U'|$ for $\text{i.i.d. } U,U'\sim\text{Uniform}(0,1)$ has an easy geometrical proof.
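(A quick empirical check of this footnote, using the c.d.f. at the arbitrary point $t=0.5$, where $1-(1-t)^2 = 0.75$.)

```python
import random

random.seed(4)
trials = 200_000
hits = sum(abs(random.random() - random.random()) <= 0.5
           for _ in range(trials))
print(round(hits / trials, 2))    # ≈ 1 - (1 - 0.5)**2 = 0.75
```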