You can instead compute another integral:

$$\begin{aligned}\mathbb{E}\max\left(X_{1},\dots,X_{n}\right) & =\int_{0}^{\infty}P\left(\max\left(X_{1},\dots,X_{n}\right)>x\right)dx\\ & =\int_{0}^{\infty}1-P\left(\max\left(X_{1},\dots,X_{n}\right)\leq x\right)dx\\ & =\int_{0}^{\infty}1-\left(1-e^{-x}\right)^{n}dx\\ & =\int_{0}^{\infty}\sum_{k=1}^{n}\binom{n}{k}\left(-1\right)^{k-1}e^{-kx}dx\\ & =\sum_{k=1}^{n}\binom{n}{k}\left(-1\right)^{k-1}\int_{0}^{\infty}e^{-kx}dx\\ & =\sum_{k=1}^{n}\binom{n}{k}\left(-1\right)^{k-1}\left[-\frac{e^{-kx}}{k}\right]_{0}^{\infty}\\ & =\sum_{k=1}^{n}\binom{n}{k}\left(-1\right)^{k-1}\frac{1}{k} \end{aligned} $$

There might be a closed form for it, but I haven't found it yet.


Edit:

According to the comment of @RScrlli, the outcome can be shown to equal the $n$-th harmonic number: $$H_n=\sum_{k=1}^n\frac1{k}$$

This makes me suspect that there is a way to find it as the expectation of $$X_{(n)}=X_{(1)}+(X_{(2)}-X_{(1)})+\cdots+(X_{(n)}-X_{(n-1)})$$
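As a quick sanity check that the alternating sum from the integral really equals $H_n$, here is a minimal Python sketch (the function names are mine, chosen for illustration):

```python
from math import comb

def alternating_sum(n):
    # sum_{k=1}^{n} C(n,k) (-1)^(k-1) / k, from the integral above
    return sum(comb(n, k) * (-1) ** (k - 1) / k for k in range(1, n + 1))

def harmonic(n):
    # H_n = sum_{k=1}^{n} 1/k
    return sum(1 / k for k in range(1, n + 1))

for n in (1, 2, 5, 10, 20):
    print(n, alternating_sum(n), harmonic(n))  # the two columns agree
```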


A clever probabilistic approach is one that takes advantage of the homogeneous rate $\lambda_i = 1$ for all $i$, the memorylessness of the exponential distribution, and the fact that $P(X_i = X_j) = 0$ for $i\neq j$.

Given $(X_1, X_2, \dots, X_n)$, we want $E\big[\max_i X_i\big]$.

$\max_i X_i$ is the final arrival in a Poisson-type process that starts with intensity $n$ and whose intensity drops by one after each arrival.

That is, the first arrival in $(X_1, X_2, \dots, X_n)$ is the first arrival in the merger of $n$ independent Poisson processes, and merging yields a Poisson process with parameter $n$.

WLOG suppose $X_n$ is the first arrival. Then consider the first arrival in $(X_1, X_2, \dots, X_{n-1})$: by memorylessness we have a fresh start with $n-1$ independent Poisson processes, which merge into a Poisson process with parameter $n-1$.

Continue on in this way until, WLOG, we only want the first arrival in $(X_1)$.

So $\max_i X_i =\sum_{i=1}^n T_i$, where the $T_i$ are the inter-arrival times described above, and $$E\big[\max_i X_i\big] =\sum_{i=1}^n E\big[T_i\big] =\sum_{i=1}^n \frac{1}{n-i+1}= \sum_{i=1}^n\frac{1}{i}$$
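If you want to convince yourself numerically, here is a small Monte Carlo sketch (assuming numpy; the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 200_000

# simulate n i.i.d. Exp(1) variables per trial and take the row-wise max
samples = rng.exponential(scale=1.0, size=(trials, n))
estimate = samples.max(axis=1).mean()

harmonic = sum(1 / k for k in range(1, n + 1))
print(estimate, harmonic)  # estimate should be close to H_10 ≈ 2.9290
```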

Really, you should always try to exploit memorylessness when dealing with exponential r.v.'s.


Let $X_{(1)},X_{(2)},\ldots,X_{(n)}$ be the order statistics corresponding to $X_1,X_2,\ldots,X_n$.

Making the transformation $(X_{(1)},\ldots,X_{(n)})\mapsto (Y_1,\ldots,Y_n)$ where $Y_1=X_{(1)}$ and $Y_i=X_{(i)}-X_{(i-1)}$ for $i=2,3,\ldots,n$, we have $Y_i$ exponential with mean $1/(n-i+1)$ independently for all $i=1,\ldots,n$.

Therefore, $$R=X_{(n)}-X_{(1)}=\sum_{i=1}^n Y_i-Y_1=\sum_{i=2}^n Y_i$$

Hence, $$\mathbb E\left[R\right]=\sum_{i=2}^n \frac1{n-i+1}$$

And since $X_{(n)}=\sum\limits_{i=1}^n Y_i$, we also have $$\mathbb E\left[X_{(n)}\right]=\sum_{i=1}^n \mathbb E\left[Y_i\right]=\sum_{i=1}^n \frac1{n-i+1}=\sum_{i=1}^n \frac1{i}$$
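Here is a short simulation sketch of the spacings claim (numpy again assumed): the $i$-th spacing $Y_i$ should have mean $1/(n-i+1)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 5, 200_000

# order statistics of n i.i.d. Exp(1) samples, then successive differences
order = np.sort(rng.exponential(size=(trials, n)), axis=1)
spacings = np.diff(order, axis=1, prepend=0.0)  # Y_1, ..., Y_n

for i, mean in enumerate(spacings.mean(axis=0), start=1):
    print(i, mean, 1 / (n - i + 1))  # empirical vs. theoretical mean
```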

Related threads:

  • Order statistics of $n$ i.i.d. exponential random variables
  • Order statistics of i.i.d. exponentially distributed sample

Alternatively, we can proceed to find the expectation of $X_{(1)}$ and $X_{(n)}$ separately as you did. Clearly $X_{(1)}$ is exponential with mean $1/n$. And the density of $X_{(n)}$ is

$$f_{X_{(n)}}(x)=ne^{-x}(1-e^{-x})^{n-1}\mathbf1_{x>0}$$

For a direct calculation of the mean of $X_{(n)}$, we have

\begin{align} \mathbb E\left[X_{(n)}\right]&=\int x f_{X_{(n)}}(x)\,dx \\&=n\int_0^\infty xe^{-x}(1-e^{-x})^{n-1}\,dx \\&=n\int_0^1(-\ln u)(1-u)^{n-1}\,du \tag{1} \\&=n\int_0^1 -\ln(1-t)t^{n-1}\,dt \tag{2} \\&=n\int_0^1 \sum_{j=1}^\infty \frac{t^j}{j}\cdot t^{n-1}\,dt \tag{3} \\&=n\sum_{j=1}^\infty \frac1j \int_0^1 t^{n+j-1}\,dt \tag{4} \\&=n\sum_{j=1}^\infty \frac1{j(n+j)} \\&=\sum_{j=1}^\infty \left(\frac1j-\frac1{n+j}\right) \\&=\sum_{j=1}^n \frac1j \end{align}

$(1)$: Substitute $e^{-x}=u$.

$(2)$: Substitute $t=1-u$.

$(3)$: Use the Maclaurin series expansion $-\ln(1-t)=\sum_{j=1}^\infty \frac{t^j}{j}$, which is valid since $t\in (0,1)$.

$(4)$: Interchange integral and sum using Fubini/Tonelli's theorem.
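The direct calculation can also be checked numerically; a minimal sketch using scipy.integrate.quad (my choice of quadrature routine, any other works as well):

```python
import numpy as np
from scipy.integrate import quad

n = 7

# E[X_(n)] = n * int_0^inf x e^{-x} (1 - e^{-x})^{n-1} dx
value, _ = quad(lambda x: n * x * np.exp(-x) * (1 - np.exp(-x)) ** (n - 1), 0, np.inf)

harmonic = sum(1 / j for j in range(1, n + 1))
print(value, harmonic)  # both should equal H_7 ≈ 2.5929
```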

We can also find the density of $R$ through the change of variables $(X_{(1)},X_{(n)})\mapsto (R,X_{(1)})$ and find $\mathbb E\left[R\right]$ directly by basically the same calculation as above.
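For completeness, the same kind of sketch for the range: from the spacing decomposition, $\mathbb E[R]=\sum_{i=2}^n \frac1{n-i+1}=H_{n-1}$, which a simulation reproduces (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 6, 200_000

samples = rng.exponential(size=(trials, n))
range_estimate = (samples.max(axis=1) - samples.min(axis=1)).mean()

h = sum(1 / k for k in range(1, n))  # H_{n-1}
print(range_estimate, h)  # estimate should be close to H_5 ≈ 2.2833
```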