The Central Limit Theorem states that for a sequence of i.i.d. random variables $\{X_i\}$, $$\frac{\overline{X} - \mu}{\sigma/\sqrt{n}} \to N(0,1)$$ in distribution as $n \to \infty$. I saw in some lecture notes that this implies $$\overline{X} \to N\left(\mu,\frac{\sigma^2}{n}\right).$$ But does that statement make any sense? The right side is a function of $n$. What lets us move the $\sqrt{n}$ and derive this implication? I can see it heuristically and how it can be used in statistics exercises (use a large value of $n$).


Solution 1:

Taken literally, anything that says $\displaystyle\left[\lim_{n\to\infty} (\cdots\cdots) = \text{something depending on }n\right]$ is wrong, as is anything that says $\displaystyle \left[ (\cdots\cdots) \to\text{something depending on }n\text{ as }n\to\infty\right]$. However, some authors adopt a convention according to which $\displaystyle\left[ (\cdots\cdots) \to N\left( \mu, \frac{\sigma^2} n \right)\text{ as }n\to\infty \right]$, often written $\bar X \sim AN\left(\mu, \frac{\sigma^2}{n}\right)$ where "$AN$" means "asymptotically normal", is shorthand for $$ \frac{\bar X-\mu}{\sigma/\sqrt n} \to N(0,1) $$ in distribution. Writing the variance as $\sigma^2/n$ emphasizes the rate of convergence.

The question says "a sequence of i.i.d. random variables". That's not quite right: the most usual form of the theorem also assumes they have finite variance. If they all have the standard Cauchy distribution, for example, then the conclusion is false: the sample mean $\bar X$ lacks a finite variance (indeed a finite mean), and in fact has exactly the same distribution as any one of the random variables being averaged.
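The Cauchy counterexample can be seen empirically (again a sketch, not part of the original answer; the sample size and replication count are arbitrary). Since the mean of $n$ i.i.d. standard Cauchy variables is itself standard Cauchy, and $P(|X| > 1) = 1/2$ for a standard Cauchy variable, the sample means should exceed 1 in absolute value about half the time no matter how large $n$ is:

```python
import numpy as np

# The average of n standard Cauchy variables is again standard
# Cauchy, so averaging does not concentrate.  For a standard Cauchy,
# P(|X| > 1) = 1/2; the sample means hit that frequency even at
# n = 200 (an arbitrary choice).
rng = np.random.default_rng(1)
n, reps = 200, 50_000
means = rng.standard_cauchy(size=(reps, n)).mean(axis=1)

print(np.mean(np.abs(means) > 1))  # near 0.5, despite averaging 200 draws
```

Contrast this with the finite-variance case, where $P(|\bar X - \mu| > 1)$ would shrink toward 0 as $n$ grows.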

Someone said in the comments under the question "I guess they just meant $\bar X \sim N(\mu,\sigma^2/n)$." That, however, is not correct unless the distribution you start with is normal, and in that case it's not a limit theorem at all: it says nothing about what happens as $n\to\infty$.
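The distinction can be checked directly (a sketch with arbitrary illustrative parameters, not from the original answer): for normal data, $\bar X \sim N(\mu, \sigma^2/n)$ holds exactly at every $n$, even $n = 2$, while for non-normal data it is only an approximation that needs large $n$. One-sigma coverage of the standardized mean should be $\approx 0.6827$ in the normal case but not in, say, the exponential case at $n = 2$:

```python
import numpy as np

# For N(5, 3^2) data, the standardized sample mean is exactly N(0, 1)
# at n = 2 -- no limit involved -- so P(|z| < 1) = 0.6827.  For
# Exponential(1) data at n = 2 the same probability is noticeably
# different, because normality there is only asymptotic.
rng = np.random.default_rng(2)
n, reps = 2, 200_000

z_norm = (rng.normal(5.0, 3.0, size=(reps, n)).mean(axis=1) - 5.0) / (3.0 / np.sqrt(n))
z_expo = (rng.exponential(1.0, size=(reps, n)).mean(axis=1) - 1.0) / (1.0 / np.sqrt(n))

print(np.mean(np.abs(z_norm) < 1))  # near 0.6827, exact normality
print(np.mean(np.abs(z_expo) < 1))  # visibly off at n = 2
```

So the comment's statement is exact only in the normal-population case, which is precisely the case where nothing is being said about $n \to \infty$.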