Distribution of $\max(X_i) \mid \min(X_i)$ when $X_i$ are i.i.d. uniform random variables

If I have $n$ independent, identically distributed uniform $(a,b)$ random variables $X_i$, why is this true: $$ \max(X_i) \mid \min(X_i) \sim \mathrm{Uniform}(\min(X_i),\, b)? $$

I agree that the probability density function of $\max(X_i) \mid \min(X_i)$ must be non-zero only in the range $[\min(X_i), b]$.

I also find it reasonable that the conditional distribution is uniform, but I cannot see why it has to be so.

Also, I am confused about how to derive this algebraically. Let $X$ be the max and $Y$ the min of the $n$ i.i.d. uniform random variables. If I write $$ f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}, $$ then $f_Y(y)$ is known, but the joint density is unknown ($X$ and $Y$ are not independent).

Thanks.


Solution 1:

Your claimed result is not true, which probably explains why you're having trouble seeing it.

For simplicity I'll let $a = 0, b = 1$. Results for general $a$ and $b$ can be obtained by a linear transformation.

Let $X_1, \ldots, X_n$ be independent uniform $(0,1)$ random variables; let $Y$ be their minimum and let $X$ be their maximum. Then the probability that $X \in [x, x+\delta x]$ and $Y \in [y, y+\delta y]$, for some small $\delta x$ and $\delta y$, is approximately

$$ n(n-1) (\delta x) (\delta y) (x-y)^{n-2} $$

since there are $n(n-1)$ ways to choose which of $X_1, \ldots, X_n$ is the smallest and which is the largest; then the minimum and maximum each need to fall in the correct small interval; and finally the other $n-2$ variables all need to fall in the interval of length $x-y$ in between. The joint density is therefore $f_{X,Y}(x,y) = n(n-1)(x-y)^{n-2}$ for $0 \le y \le x \le 1$.
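The same joint density can be double-checked from a CDF computation, if you prefer to avoid the infinitesimal argument. For $0 \le y \le x \le 1$, the event $\{X \le x,\ Y > y\}$ says that all $n$ points land in $(y, x]$, so

$$ F_{X,Y}(x,y) = P(X \le x) - P(X \le x,\ Y > y) = x^n - (x-y)^n, $$

and differentiating once in $x$ and once in $y$ gives

$$ f_{X,Y}(x,y) = \frac{\partial^2 F_{X,Y}}{\partial x\, \partial y} = n(n-1)(x-y)^{n-2}, $$

in agreement with the counting argument above.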

The density of $Y$ can then be obtained by integrating the joint density over $x$. Alternatively, $P(Y \ge y) = (1-y)^n$, so $f_Y(y) = n(1-y)^{n-1}$.
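To see that the two routes agree, integrating the joint density over $x$ indeed recovers the same marginal:

$$ f_Y(y) = \int_y^1 n(n-1)(x-y)^{n-2}\,dx = n(n-1)\cdot\frac{(1-y)^{n-1}}{n-1} = n(1-y)^{n-1}. $$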

The conditional density you seek is then $$ f_{X|Y}(x \mid y) = \frac{n(n-1)(x-y)^{n-2}}{n(1-y)^{n-1}} = \frac{(n-1)(x-y)^{n-2}}{(1-y)^{n-1}}, $$ where of course we restrict to $x > y$.
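As a quick sanity check on the algebra, this conditional density integrates to $1$ over its support $y < x \le 1$:

$$ \int_y^1 \frac{(n-1)(x-y)^{n-2}}{(1-y)^{n-1}}\,dx = \frac{(1-y)^{n-1}}{(1-y)^{n-1}} = 1. $$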

For a numerical example, let $n = 5$ and $y = 2/3$. Then we get $f_{X|Y}(x \mid 2/3) = 4(x-2/3)^3/(1/3)^4 = 324(x-2/3)^3$ on $2/3 \le x \le 1$. This is larger near $1$ than near $2/3$, which makes sense -- it's hard to squeeze a lot of points into a small interval!
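If you would rather see this numerically, here is a minimal NumPy sketch (the half-width `eps` of the band around the conditioned minimum and the number of trials are arbitrary choices). It compares the empirical mean of the maximum, among trials whose minimum lands near $2/3$, with the mean $\int_{2/3}^1 x \cdot 324(x-2/3)^3\,dx = 14/15 \approx 0.933$ implied by the density above:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5              # number of i.i.d. Uniform(0,1) variables per trial
y0 = 2 / 3         # value of the minimum we condition on
eps = 0.005        # half-width of the band around y0 (arbitrary choice)
trials = 2_000_000

samples = rng.uniform(0.0, 1.0, size=(trials, n))
mins = samples.min(axis=1)
maxs = samples.max(axis=1)

# Keep only the trials whose minimum falls near y0, and look at their maxima.
cond_max = maxs[np.abs(mins - y0) < eps]

print("trials kept:", cond_max.size)
print("empirical   E[max | min near 2/3]:", cond_max.mean())
print("theoretical E[max | min  =   2/3]:", 14 / 15)
```

The empirical mean should come out close to $0.933$, well away from the value $(2/3 + 1)/2 \approx 0.83$ that a $\mathrm{Uniform}(2/3, 1)$ maximum would have.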

The result you quote holds only when $n = 2$: if I have two i.i.d. uniform $(0,1)$ random variables, then conditional on the value of the minimum, the maximum is uniform on the interval between the minimum and $1$. This is because we don't have to worry about fitting points between the minimum and the maximum, since there are $n - 2 = 0$ of them.
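Plugging $n = 2$ into the conditional density above makes this explicit:

$$ f_{X|Y}(x \mid y) = \frac{(2-1)(x-y)^0}{(1-y)^1} = \frac{1}{1-y}, \qquad y < x \le 1, $$

which is exactly the $\mathrm{Uniform}(y, 1)$ density.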

Solution 2:

Hmm, I wonder if this question is from one of my statistics students, since I just used this in last night's lecture.

As the previous respondent pointed out, what you say is false. What is true is that, given $Y = \min_i X_i$, the conditional distribution of $\max_i X_i$ is that of the maximum of $n - 1$ independent $U(Y,b)$ random variables.

Here is an applied probability approach: given $Y$, we know that all the $X_i \ge Y$ and that $X_i = Y$ for exactly one $i$. This leaves $n-1$ random variables, each of which has the conditional distribution of a $U(a,b)$ restricted to $[Y,b]$, that is, each is $U(Y,b)$.
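Spelling that out: rescaling the conditional density from Solution 1 from $(0,1)$ to $(a,b)$ gives $f_{X|Y}(x \mid y) = (n-1)(x-y)^{n-2}/(b-y)^{n-1}$ on $y < x \le b$, and its CDF is

$$ P\bigl(\max_i X_i \le z \mid Y = y\bigr) = \int_y^z \frac{(n-1)(x-y)^{n-2}}{(b-y)^{n-1}}\,dx = \left(\frac{z-y}{b-y}\right)^{n-1}, \qquad y \le z \le b, $$

which is precisely the CDF of the maximum of $n-1$ independent $U(y,b)$ random variables.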

If you really want to do this algebraically, let $Z = \max_i X_i$; the joint density is the mixed second partial derivative of $$ P(Y \le y,\ Z \le z) = P(Z \le z) - P(Y > y,\ Z \le z). $$
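Carrying that out for $U(a,b)$: for $a \le y \le z \le b$ we have $P(Z \le z) = \left(\frac{z-a}{b-a}\right)^n$ and $P(Y > y,\ Z \le z) = \left(\frac{z-y}{b-a}\right)^n$, so

$$ f_{Y,Z}(y,z) = \frac{\partial^2}{\partial y\,\partial z}\left[\left(\frac{z-a}{b-a}\right)^n - \left(\frac{z-y}{b-a}\right)^n\right] = \frac{n(n-1)(z-y)^{n-2}}{(b-a)^n}, $$

which reduces to the joint density found in Solution 1 when $a = 0$ and $b = 1$.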