Consistency of the estimator $T_n(X)=\frac{n+1}{n}X_{(n)}$
Let $X_1,X_2,\ldots,X_n$ be a random sample from a population with the Uniform$(0,\theta)$ distribution, whose PDF is given by: $$f_{X}(x)=\left\{\begin{array}{cc} \frac{1}{\theta}, & 0 \leq x \leq \theta \\ 0 & \text{else} \end{array}\right\}$$
Now let $Y=X_{(n)}=\max\left\{X_1,X_2,\ldots,X_n\right\}$. Since $F_Y(y)=P(\max_i X_i \le y)=\left(\frac{y}{\theta}\right)^n$ for $0 \le y \le \theta$, differentiating gives the PDF: $$f_{Y}(y)=\left\{\begin{array}{cc} \frac{n y^{n-1}}{\theta^{n}}, & 0<y \leq \theta \\ 0 & \text { else } \end{array}\right\}$$
By simple integration we get: $$E(X_{(n)})=\frac{n}{n+1}\theta$$
Now consider the estimator: $$T_n(X)=\frac{n+1}{n}X_{(n)}=\frac{n+1}{n}Y$$ By linearity of expectation, $E(T_n(X))=\theta$, so the estimator is unbiased.
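As a quick numerical sanity check of unbiasedness (a simulation sketch, not part of the proof; the values $\theta = 2$, $n = 50$, and the replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 50, 100_000  # arbitrary illustrative values

# Each row is one sample of size n from Uniform(0, theta);
# T_n = (n+1)/n times the sample maximum.
samples = rng.uniform(0.0, theta, size=(reps, n))
t_n = (n + 1) / n * samples.max(axis=1)

print(np.mean(t_n))  # close to theta, consistent with unbiasedness
```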
I want to check Consistency of the above estimator. I started with as follows:
$$P\left(\left|T_{n}(X)-\theta\right| \leq \varepsilon\right)=P\left(\theta-\varepsilon \leq T_{n}(X) \leq \theta+\varepsilon\right)$$
$\implies$ $$P\left(\left|T_{n}(X)-\theta\right| \leq \varepsilon\right)=P\left(\frac{n(\theta-\varepsilon)}{n+1} \leq Y \leq \frac{n(\theta+\varepsilon)}{n+1}\right)$$
So we get: $$P\left(\left|T_{n}(X)-\theta\right| \leq \varepsilon\right)=\int_{\frac{n(\theta-\varepsilon)}{n+1}}^{\frac{n(\theta+\varepsilon)}{n+1}} f_{Y}(y)\, d y=\int_{\frac{n(\theta-\varepsilon)}{n+1}}^{\frac{n(\theta+\varepsilon)}{n+1}} \frac{ny^{n-1}}{\theta^n}\, d y$$
$\implies$ $$P\left(\left|T_{n}(X)-\theta\right| \leq \varepsilon\right)=\frac{\left(1+\frac{\varepsilon}{\theta}\right)^{n}-\left(1-\frac{\varepsilon}{\theta}\right)^{n}}{\left(1+\frac{1}{n}\right)^{n}}$$
Now I am stuck in proving that the above probability tends to $1$ as $n \to \infty$.
Solution 1:
For a fixed $\epsilon > 0$ and all $n > \theta/\epsilon$, $$\frac{n(\theta+\epsilon)}{n+1} > \theta.$$ So you cannot simply take the upper limit of integration as you have: for any $\epsilon > 0$, that limit of integration eventually exceeds $\theta$, where $f_Y$ vanishes.
Recognizing this, we can see that the correct expression is $$\lim_{n \to \infty} \Pr[|T_n(x) - \theta| \le \epsilon] = \lim_{n \to \infty} \int_{y = \frac{n(\theta-\epsilon)}{n+1}}^\theta \frac{ny^{n-1}}{\theta^n} \, dy = \lim_{n \to \infty} 1 - \frac{(1-\epsilon/\theta)^n}{(1+1/n)^n} = 1.$$
If this seems to lack sufficient rigor, we can instead write
$$\Pr[|T_n(x) - \theta| \le \epsilon] = U(n;\theta,\epsilon)^n - L(n;\theta, \epsilon)^n,$$ where $$U(n;\theta,\epsilon) = \min\left( 1, \frac{n(1+\epsilon/\theta)}{n+1} \right), \quad L(n;\theta,\epsilon) = \max\left(0, \frac{n(1-\epsilon/\theta)}{n+1}\right).$$ But since the case $\epsilon > \theta$ is uninteresting, $L$ is without loss of generality $n(1-\epsilon/\theta)/(n+1)$; however, $U$ always becomes $1$ for sufficiently large $n$. Thus $$\lim_{n \to \infty} U(n;\theta,\epsilon)^n - L(n;\theta,\epsilon)^n = 1 - \lim_{n \to \infty} L(n;\theta,\epsilon)^n = 1.$$
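The clipped expression $U^n - L^n$ is easy to evaluate numerically; a short sketch (the choices $\theta = 2$ and $\epsilon = 0.1$ are arbitrary):

```python
theta, eps = 2.0, 0.1  # arbitrary illustrative values

def prob_within_eps(n):
    # Exact P(|T_n - theta| <= eps), using the CDF of Y = X_(n):
    # F_Y(y) = (y/theta)^n on [0, theta], with the limits clipped to [0, theta].
    upper = min(1.0, n * (1 + eps / theta) / (n + 1))
    lower = max(0.0, n * (1 - eps / theta) / (n + 1))
    return upper ** n - lower ** n

for n in (10, 100, 1000):
    print(n, prob_within_eps(n))  # increases toward 1
```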
Solution 2:
Another approach:
Using Chebyshev's inequality (applicable here since $E(T_n(X))=\theta$):
$$
P(|T_n(X)-\theta|\leq\varepsilon)\geq 1-\frac{\operatorname{Var}(T_n(X))}{\varepsilon^2},
$$
where
$$
\operatorname{Var}(T_n(X))=\left(\frac{n+1}{n}\right)^2\operatorname{Var}(X_{(n)})=\left(\frac{n+1}{n}\right)^2\cdot\frac{n\theta^2}{(n+2)(n+1)^2}=\frac{\theta^2}{n(n+2)},
$$
so
$$
P(|T_n(X)-\theta|\leq\varepsilon)\geq 1-\frac{\theta^2}{n(n+2)\varepsilon^2}\to 1 \quad (n\to\infty).
$$
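One can compare the Chebyshev lower bound against the coverage probability estimated by simulation (a sketch; $\theta = 2$, $\epsilon = 0.1$, the sample sizes, and the replication count are arbitrary, and $\operatorname{Var}(T_n(X)) = \theta^2/(n(n+2))$ follows from scaling $\operatorname{Var}(X_{(n)})$ by $((n+1)/n)^2$):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, eps, reps = 2.0, 0.1, 20_000  # arbitrary illustrative values

for n in (10, 100, 500):
    samples = rng.uniform(0.0, theta, size=(reps, n))
    t_n = (n + 1) / n * samples.max(axis=1)
    var_tn = theta ** 2 / (n * (n + 2))  # Var(T_n) = ((n+1)/n)^2 Var(X_(n))
    cheb_bound = 1 - var_tn / eps ** 2   # vacuous (negative) for small n
    empirical = np.mean(np.abs(t_n - theta) <= eps)
    print(n, cheb_bound, empirical)     # empirical coverage >= bound, both -> 1
```

Note that the bound is loose (even negative) for small $n$, but both it and the empirical coverage approach $1$, which is all consistency requires.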