Showing that asymptotic normality implies consistency.

In a statistics book I'm reading, it is postulated that asymptotic normality of an estimator implies consistency. That is, $$ \hat{\theta}_n \stackrel{as}{\sim} \mathcal{N}\left(\theta_0, \frac{1}{n}\sigma(\theta_0)\right) \Rightarrow P_{\theta_0}(|\hat{\theta}_n - \theta_0|>\epsilon ) \to 0 $$ as $n\to\infty$, for all $\theta_0 \in \Theta$ and $\epsilon>0$.

I am trying to prove this, but I can't seem to make a breakthrough.

If anyone could shed some light on how this is proven, I would be very grateful.


Solution 1:

If the distribution of $X_n$ is $\mathcal N(x,\sigma_n^2)$ and $\sigma_n^2\to0$, then $X_n\to x$ in probability, that is, $P[|X_n-x|\geqslant\epsilon]\to0$ for every positive $\epsilon$.

To prove this, an easy route is the Bienaymé–Chebyshev inequality: since $E[X_n]=x$ and $\mathrm{var}(X_n)=\sigma_n^2$, one gets $$ P[|X_n-x|\geqslant\epsilon]=P[|X_n-E[X_n]|\geqslant\epsilon]\leqslant\frac{\mathrm{var}(X_n)}{\epsilon^2}=\frac{\sigma_n^2}{\epsilon^2}. $$
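A quick numerical sanity check of this bound (not part of the proof): the sketch below draws from $\mathcal N(x,\sigma_n^2)$ with $\sigma_n^2 = 1/n$ and compares the Monte Carlo tail probability against $\sigma_n^2/\epsilon^2$. The values $x = 2$, $\epsilon = 0.5$, and the function name are my own choices for illustration.

```python
import numpy as np

def tail_prob_and_bound(n, x=2.0, eps=0.5, n_draws=100_000, seed=0):
    """Monte Carlo estimate of P[|X_n - x| >= eps] for X_n ~ N(x, 1/n),
    together with the Bienaymé–Chebyshev bound sigma_n^2 / eps^2."""
    rng = np.random.default_rng(seed)
    sigma2 = 1.0 / n  # variance shrinking to 0, as in the premise
    draws = rng.normal(loc=x, scale=np.sqrt(sigma2), size=n_draws)
    p_hat = float(np.mean(np.abs(draws - x) >= eps))
    return p_hat, sigma2 / eps**2

for n in (10, 100, 1000):
    p_hat, bound = tail_prob_and_bound(n)
    print(f"n={n:5d}  tail prob ~ {p_hat:.4f}  Chebyshev bound = {bound:.4f}")
```

Both columns shrink toward zero as $n$ grows, with the estimated tail probability sitting below the (rather loose) Chebyshev bound.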

Solution 2:

A more concise answer follows using the $O_p$ and $o_p$ notation for stochastic orders. Your premise is that

$$\sqrt{n}(\theta_n-\theta_0)\overset{d}{\rightarrow}Z$$

where $Z$ is normally distributed with mean $0$ and variance $\sigma(\theta_0)$. Therefore

$$\theta_n-\theta_0=O_p\left(\frac{1}{\sqrt{n}}\right)\Rightarrow \theta_n-\theta_0= o_p(1)$$

hence $\theta_n-\theta_0\overset{p}{\rightarrow}0$, which finishes the proof.
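For completeness, the stochastic-order step can be spelled out as follows:

$$
\theta_n-\theta_0
=\frac{1}{\sqrt{n}}\cdot\sqrt{n}(\theta_n-\theta_0)
=\frac{1}{\sqrt{n}}\,O_p(1)
=o_p(1),
$$

since $\sqrt{n}(\theta_n-\theta_0)$ converges in distribution and is therefore bounded in probability, i.e. $O_p(1)$, and a deterministic sequence tending to zero multiplied by an $O_p(1)$ term is $o_p(1)$.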