Here is another way of looking at the problem that uses the natural symmetry.

The samples of size $k$ are in one-to-one correspondence with the sequence of "gaps" between the observations (via the sequence of order statistics):

$$\{x_1,x_2,\dots, x_k\}\leftrightarrow x_{(1)}<x_{(2)}<\cdots <x_{(k)} \leftrightarrow (G_1,G_2,\dots, G_{k+1})$$

Here we define $G_1=x_{(1)}$, $G_j=x_{(j)}-x_{(j-1)}$ for $2\leq j\leq k$, and $G_{k+1}=(N+1)-x_{(k)}$. Since every vector of positive integers summing to $N+1$ corresponds to exactly one (equally likely) sample, permuting a gap vector yields another gap vector of the same probability; hence $G_1,G_2,\dots, G_{k+1}$ are exchangeable random variables and, in particular, identically distributed.

Since $\sum_{j=1}^{k+1} G_j=N+1$ identically, taking expectations gives $(k+1)\,\mathbb{E}(G_1)=N+1$, so each of these $k+1$ identically distributed random variables satisfies $\mathbb{E}(G_j)={N+1\over k+1}$.

In particular, $\mathbb{E}(G_{k+1})=\mathbb{E}((N+1)-x_{(k)})={N+1\over k+1}$ which implies $$\mathbb{E}(\mbox{Maximum})=\mathbb{E}(x_{(k)})=(N+1)- {N+1\over k+1}={k\over k+1} (N+1).$$

Similarly, $\mathbb{E}(x_{(j)})=\frac{j}{k+1}(N+1)$, so the expected values of the order statistics $\mathbb{E}(x_{(1)}),\mathbb{E}(x_{(2)}),\dots, \mathbb{E}(x_{(k)})$ are equally spaced from ${1\over k+1} (N+1)$ to ${k\over k+1} (N+1).$
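The gap argument can be checked exactly by enumerating all $\binom{N}{k}$ equally likely samples for small parameters; the values $N=10$, $k=3$ below are illustrative choices, not from the original answer.

```python
# Exact check: sampling k values without replacement from {1, ..., N},
# the j-th order statistic has E[x_(j)] = j*(N+1)/(k+1).
from fractions import Fraction
from itertools import combinations

N, k = 10, 3  # illustrative small parameters
samples = list(combinations(range(1, N + 1), k))  # all C(N, k) equally likely samples

for j in range(1, k + 1):
    # combinations() yields sorted tuples, so s[j-1] is the j-th order statistic
    mean = Fraction(sum(s[j - 1] for s in samples), len(samples))
    assert mean == Fraction(j * (N + 1), k + 1)
    print(f"E[x_({j})] = {mean}")
```

Exact rational arithmetic (`Fraction`) avoids any floating-point tolerance in the comparison.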



Call $B_k^N=\sum\limits_{m=k}^N\binom{m-1}{k-1}$.


Fact 1: $B_k^N=\binom{N}{k}$ (either because the masses of the distribution of the maximum must sum to $1$, or by a direct computation).


Fact 2: For every $n\geqslant i\geqslant 1$, $n\binom{n-1}{i-1}=i\binom{n}{i}$.


Now to the proof.

Fact 2 for $(n,i)=(m,k)$ gives $\sum\limits_{m=k}^Nm\binom{m-1}{k-1}=\sum\limits_{m=k}^Nk\binom{m}{k}=k\sum\limits_{m=k+1}^{N+1}\binom{m-1}{(k+1)-1}=kB_{k+1}^{N+1}$.

Fact 1 gives $B_{k+1}^{N+1}=\binom{N+1}{k+1}$.

Fact 2 for $(n,i)=(N+1,k+1)$ (or a direct computation) gives $(k+1)B_{k+1}^{N+1}=(N+1)B_k^N$.

Finally, $\mu=\dfrac{kB_{k+1}^{N+1}}{B_k^N}=k\dfrac{N+1}{k+1}$.
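The chain of identities above is easy to verify numerically; the sketch below mirrors the definition of $B_k^N$ and checks Facts 1 and 2 together with the final value of $\mu$, using illustrative parameters $N=12$, $k=4$.

```python
# Sanity check of Fact 1, Fact 2, and mu = k(N+1)/(k+1).
from fractions import Fraction
from math import comb

def B(k, N):
    # B_k^N = sum_{m=k}^N C(m-1, k-1)
    return sum(comb(m - 1, k - 1) for m in range(k, N + 1))

N, k = 12, 4  # illustrative values
assert B(k, N) == comb(N, k)  # Fact 1
assert all(n * comb(n - 1, i - 1) == i * comb(n, i)
           for n in range(1, 20) for i in range(1, n + 1))  # Fact 2
assert (k + 1) * B(k + 1, N + 1) == (N + 1) * B(k, N)  # consequence of Facts 1 and 2

mu = Fraction(k * B(k + 1, N + 1), B(k, N))
assert mu == Fraction(k * (N + 1), k + 1)
print("mu =", mu)
```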


Edit: The same method yields, for every $i\geqslant0$, $$ \mathrm E(X(X+1)\cdots(X+i))=\frac{k}{k+i+1}(N+1)(N+2)\cdots(N+i+1). $$
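This generalized moment identity can be spot-checked by exact enumeration over all samples, here for the illustrative parameters $N=9$, $k=3$ and $i=0,\dots,3$ (with $X$ the sample maximum).

```python
# Spot check of E[X(X+1)...(X+i)] = k/(k+i+1) * (N+1)(N+2)...(N+i+1),
# where X is the maximum of k values drawn without replacement from {1,...,N}.
from fractions import Fraction
from itertools import combinations
from math import prod

N, k = 9, 3  # illustrative small parameters
samples = list(combinations(range(1, N + 1), k))
for i in range(4):
    lhs = Fraction(sum(prod(max(s) + t for t in range(i + 1)) for s in samples),
                   len(samples))
    rhs = Fraction(k, k + i + 1) * prod(N + 1 + t for t in range(i + 1))
    assert lhs == rhs
print("identity holds for i = 0..3 with N=9, k=3")
```

The case $i=0$ recovers $\mathbb{E}(X)=k(N+1)/(k+1)$.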


Alternatively, $$\mu = \sum_{m=k}^N m \frac{\binom{m-1}{k-1}}{\binom{N}{k}} = \sum_{m=k}^{N}\frac{m!\, k!(N-k)!}{N!(k-1)!(m-k)!}$$ $$= k\frac{k! (N-k)!}{N!} \sum_{m=k}^N \binom{m}{m-k} = k\frac{k! (N-k)!}{N!} \binom{N+1}{N-k}$$ $$= k\frac{k! (N-k)! (N+1)!}{N! (N-k)! (k+1)!} = k\frac{N+1}{k+1}$$

where $$\sum_{m=k}^N \binom{m}{m-k} = \sum_{i=0}^{N-k} \binom{k+i}{i} = \binom{N+1}{N-k}$$ can be seen from Pascal's Triangle.
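The hockey-stick identity invoked here can be verified directly over a small range of parameters:

```python
# Hockey-stick identity: sum_{m=k}^N C(m, m-k) = C(N+1, N-k),
# equivalently sum_{m=k}^N C(m, k) = C(N+1, k+1).
from math import comb

for N in range(1, 15):
    for k in range(1, N + 1):
        lhs = sum(comb(m, m - k) for m in range(k, N + 1))
        assert lhs == comb(N + 1, N - k)
print("hockey-stick identity verified for 1 <= k <= N <= 14")
```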


We know that $$\mu = \sum_{m=k}^N m \frac{\binom{m-1}{k-1}}{\binom{N}{k}}$$ $$= \sum_{m=k}^{N}m \frac{(m-1)! k!(N-k)!}{N!(k-1)!(m-k)!}$$

$$= \frac{k!(N-k)!}{N!(k-1)!} \sum_{m=k}^{N} m \frac{(m-1)!}{(m-k)!}$$

$$ = \frac{k (N-k)!}{N!} \sum_{m=k}^{N} \frac{m!}{(m-k)!}$$

The expression on the right is a sum of falling factorials. Since $\dfrac{m!}{(m-k)!}=k!\dbinom{m}{k}$, the hockey-stick identity gives $$\sum_{m=k}^{N} \frac{m!}{(m-k)!} = k!\sum_{m=k}^{N}\binom{m}{k} = k!\binom{N+1}{k+1} = \frac{(N+1)!}{(k+1)\,(N-k)!},$$ and therefore $$\mu = \frac{k\,(N-k)!}{N!}\cdot\frac{(N+1)!}{(k+1)\,(N-k)!} = k\,\frac{N+1}{k+1}.$$
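The falling-factorial sum collapses via the hockey stick: $\sum_{m=k}^N m!/(m-k)! = k!\binom{N+1}{k+1}$. A quick exhaustive check over small parameters:

```python
# Check that sum_{m=k}^N m!/(m-k)! = k! * C(N+1, k+1).
from math import comb, factorial

for N in range(1, 12):
    for k in range(1, N + 1):
        s = sum(factorial(m) // factorial(m - k) for m in range(k, N + 1))
        assert s == factorial(k) * comb(N + 1, k + 1)
print("falling-factorial sum identity verified for 1 <= k <= N <= 11")
```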