Unbiased estimator of a uniform distribution
Solution 1:
Another answer has already pointed out why your intuition is flawed, so let us do some computations.
If $X_1,\dots,X_n$ are i.i.d. uniform on $[0,\theta]$, then by independence: $$ P(X_{max}<x)=P(X_i<x,\ \forall i)=\prod_{i=1}^n P(X_i<x)= \begin{cases} 1 & \text{if } x\ge \theta \\ \left(\frac{x}{\theta}\right)^n & \text{if } 0\le x< \theta \\ 0 & \text{if } x< 0 \end{cases} $$ Differentiating, the density function of $X_{max}$ is: $$ f_{max}(x;\theta)=\begin{cases} \frac{n}{\theta^n}x^{n-1} & \text{if } 0\le x\le \theta \\ 0 & \text{otherwise} \end{cases} $$ Then we can compute the expected value of $X_{max}$: $$ E(X_{max})=\int_0^\theta x\,\frac{n}{\theta^n}x^{n-1}\,dx =\frac{n}{\theta^n}\cdot\frac{\theta^{n+1}}{n+1} =\frac{n}{n+1}\,\theta $$ so $X_{max}$ is a biased estimator of $\theta$, whereas $\frac{n+1}{n}X_{max}$ is unbiased.
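As a quick sanity check (not part of the derivation), here is a minimal Monte Carlo sketch in Python; the values of `theta`, `n`, and the number of trials are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 2.0, 5, 200_000  # arbitrary illustration values

# trials independent samples, each of size n, from Uniform(0, theta)
samples = rng.uniform(0.0, theta, size=(trials, n))
x_max = samples.max(axis=1)

print("E(X_max)            ~", x_max.mean())          # ~ n/(n+1) * theta = 5/3
print("theory n/(n+1)*theta =", n / (n + 1) * theta)
print("E((n+1)/n * X_max)  ~", ((n + 1) / n * x_max).mean())  # ~ theta = 2.0
```

The corrected estimator's sample mean should land very close to $\theta$, while the raw maximum's sits visibly below it.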
Solution 2:
To show that the sample maximum $x_{max} = \max_{i=1}^n\{x_i\}$ is an unbiased estimator of $\theta$ you would need to show that $ E(x_{max}) = \theta.$ This is saying that the average value of the maximum of $n$ uniform variables on $[0,\theta]$ is $\theta.$
This cannot be right: $\theta$ is the largest value any of the uniform variables can take. The sample maximum $x_{max}$ tends to be close to $\theta$, but it is always less than or equal to it. (Actually, because this is a continuous distribution, it is almost surely strictly less, but that's beside the point.) Thus the average value of the sample maximum is strictly less than $\theta$, and the sample maximum is a biased estimator.
In order to compute the $k$ that gives you an unbiased estimator, you must demand $$E(\hat \theta) = kE(x_{max}) = \theta$$ so take $$ k = \frac{\theta}{E(x_{max})}$$
The remaining work is to calculate $E(x_{max})$ as a function of $\theta$ and $n$. (For $k$ to define a usable estimator it must not depend on the unknown $\theta$; this works out because $E(x_{max})$ turns out to be proportional to $\theta$, so the $\theta$'s cancel.)
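If you want to check your answer numerically before (or after) doing the integral, a small simulation sketch estimates $k = \theta / E(x_{max})$ directly; `theta` and the trial count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, trials = 3.0, 200_000  # arbitrary illustration values

for n in (2, 5, 10):
    x_max = rng.uniform(0.0, theta, size=(trials, n)).max(axis=1)
    k_hat = theta / x_max.mean()            # estimated k = theta / E(x_max)
    print(n, round(k_hat, 4), (n + 1) / n)  # k_hat ~ (n+1)/n, independent of theta
```

The estimates depend only on $n$, matching the $\frac{n+1}{n}$ found in Solution 1.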
Solution 3:
Others have already provided excellent answers showing that $ X_{\mathrm{max}} = \max\{X_1,\dots,X_n\}$ is a biased estimator and that $\frac{n+1}{n}X_{\mathrm{max}}$ is an unbiased estimator. But $\frac{n+1}{n}X_{\mathrm{max}}$ is not the only unbiased estimator.
Let $\hat{\theta} = 2\overline{X}_n$, where $\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$ is the sample mean. Then $\hat{\theta}$ is also an unbiased estimator of $\theta$:
$$ \begin{align*} \mathrm{bias}(\hat{\theta}) &= \mathbb{E}(\hat{\theta}) - \theta\\ &= 2\mathbb{E}(\overline{X}_n) - \theta\\ &= \frac{2}{n}\sum_{i=1}^n \mathbb{E} (X_i) - \theta\\ &= \frac{2}{n}n\frac{\theta}{2} - \theta\\ &= 0 \end{align*} $$
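A minimal simulation sketch confirming this computation (again with arbitrary choices of `theta`, `n`, and trial count):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, trials = 2.0, 5, 200_000  # arbitrary illustration values

samples = rng.uniform(0.0, theta, size=(trials, n))
theta_hat = 2.0 * samples.mean(axis=1)  # estimator: twice the sample mean

print("E(2 * Xbar_n) ~", theta_hat.mean())  # ~ theta = 2.0
```

The sample average of $2\overline{X}_n$ over many trials should sit right on top of $\theta$, consistent with the bias computation above.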