The limit of a convergent sequence of Gaussian random variables is still a Gaussian random variable

  • First, we note that the sequences $\{\sigma_n\}$ and $\{\mu_n\}$ have to be bounded. This is a consequence of what was done in this thread, since we have in particular convergence in law; a quantitative version of this step is spelled out after this list. What we use is the following:

If $(X_n)_n$ is a sequence of random variables converging in distribution to $X$, then for each $\varepsilon\gt 0$, there is $R$ such that for each $n$, $\mathbb P(|X_n|\geqslant R)\lt \varepsilon$ (tightness).

To see this, we may assume that $X_n$ and $X$ are non-negative (otherwise pass to absolute values, which still converge in distribution by the continuous mapping theorem). Let $F_n$ and $F$ be the cumulative distribution functions of $X_n$ and $X$. Take $t$ such that $F(t)\gt 1-\varepsilon$ and $t$ is a continuity point of $F$. Then $F_n(t)\gt 1-\varepsilon$ for $n\geqslant N$ for some $N$, and the remaining finite collection $X_1,\dots,X_{N-1}$ is automatically tight.

  • Now, fix an arbitrary strictly increasing sequence $\{n_k\}$. Since both sequences are bounded, we can extract further subsequences of $\{\sigma_{n_k}\}$ and $\{\mu_{n_k}\}$ which converge, respectively, to some $\sigma$ and $\mu$; along these, $\varphi_{X_{n_k}}(t)=e^{it\mu_{n_k}-\sigma_{n_k}^2t^2/2}\to\varphi_X(t)$ for every $t$. Taking the modulus at $t=1$, we see that $e^{-\sigma^2/2}=|\varphi_X(1)|$, so $\sigma$ is uniquely determined.
  • We have $e^{it\mu}=\varphi_X(t)e^{\sigma^2t^2/2}$ for all $t\in\Bbb R$, so $\mu$ is also completely determined. Since every subsequence therefore has a further subsequence along which $(\mu_{n_k},\sigma_{n_k})$ tends to the same limit $(\mu,\sigma)$, the full sequences converge, and $\varphi_X(t)=e^{it\mu-\sigma^2t^2/2}$, i.e. $X\sim N(\mu,\sigma^2)$.
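For completeness, here is a sketch of the quantitative step promised above, assuming $\sigma_n\gt 0$ so that the densities exist. If $\sigma_{n_k}\to\infty$ along a subsequence, then the density of $X_{n_k}$ is bounded by $1/(\sqrt{2\pi}\,\sigma_{n_k})$, so for every $R$
$$\mathbb P(|X_{n_k}|\leqslant R)\leqslant \frac{2R}{\sqrt{2\pi}\,\sigma_{n_k}}\longrightarrow 0,$$
while if $|\mu_{n_k}|\geqslant R$, then by symmetry of the Gaussian about its mean
$$\mathbb P(|X_{n_k}|\geqslant R)\geqslant \mathbb P\big(\operatorname{sgn}(\mu_{n_k})(X_{n_k}-\mu_{n_k})\geqslant 0\big)=\tfrac12.$$
Either case contradicts tightness (take $\varepsilon\lt\tfrac12$), so both sequences are bounded.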

Although this question is old and already has a perfect answer, I provide here a slightly different proof, one which mainly shows the convergence of $\mu_n$ in a funny way (which is the whole point of writing this).

Notice first that, by Lévy's continuity theorem, we have the existence of the limit $\phi_{X_n}(t)\to\phi_X(t)$ for every $t$, and that $|\phi_X(1)|>0$: otherwise $\sigma_n^2\to\infty$, hence $|\phi_{X_n}(t)|=e^{-\sigma_n^2t^2/2}\to 0$ for all $t\neq 0$, and the limit function would be discontinuous at $0$, which no characteristic function is. Therefore, using continuity of $\log|\cdot|$ on $(0,\infty)$, we also find the existence and finiteness of the limit \begin{align} \lim_{n\to\infty}-2\log|\phi_{X_n}(1)|=\lim_{n\to\infty}\sigma_n^2 \end{align} which we will call $\sigma^2$ (note that it might be $0$).
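For the record, the identity inside this limit is just the modulus of the Gaussian characteristic function at $t=1$:
\begin{align}
|\phi_{X_n}(1)|=\left|e^{i\mu_n}\right|e^{-\sigma_n^2/2}=e^{-\sigma_n^2/2},
\qquad\text{so}\qquad
-2\log|\phi_{X_n}(1)|=\sigma_n^2.
\end{align}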

To show that $\mu_n$ is bounded, we assume it is unbounded, so that there is an unbounded subsequence $\mu_{n_k}$. Substituting $r=\frac{t-\mu_{n_k}}{\sqrt{2\sigma_{n_k}^2}}$ in the CDF, \begin{align*} \lim_{k\to\infty}F_{X_{n_k}}(x)=\lim_{k\to\infty}\int^x_{-\infty}\frac{1}{\sqrt{2\pi\sigma_{n_k}^2}}e^{-\frac{(t-\mu_{n_k})^2}{2\sigma_{n_k}^2}}\,dt = \lim_{k\to\infty}\int^\infty_{-\infty}\mathbf{1}\left\{r\leq \frac{x-\mu_{n_k}}{\sqrt{2\sigma_{n_k}^2}}\right\}\frac{e^{-r^2}}{\sqrt{\pi}}\,dr \end{align*} Passing to a further subsequence along which $\mu_{n_k}\to+\infty$ or $\mu_{n_k}\to-\infty$ and using DCT, we get either $0$ or $1$ for every $x$, which contradicts the fact that $F_{X_n}$ converges to a CDF at its continuity points. So $\mu_n$ is bounded.
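Explicitly, say $\mu_{n_k}\to+\infty$ along the further subsequence (the case $\mu_{n_k}\to-\infty$ is symmetric). Since $\sigma_{n_k}^2$ converges, for every fixed $x$ and $r$
\begin{align*}
\frac{x-\mu_{n_k}}{\sqrt{2\sigma_{n_k}^2}}\xrightarrow[k\to\infty]{}-\infty,
\qquad\text{so}\qquad
\mathbf{1}\left\{r\leq \frac{x-\mu_{n_k}}{\sqrt{2\sigma_{n_k}^2}}\right\}\xrightarrow[k\to\infty]{}0,
\end{align*}
and DCT (with dominating function $e^{-r^2}/\sqrt{\pi}$) gives $F_{X_{n_k}}(x)\to 0$ for every $x$.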

By considering the limit of $\phi_{X_n}(t)e^{\sigma_n^2t^2/2}$, which exists for every $t$ since both factors converge, we conclude the convergence of $$\lim_{n\to\infty}\cos(\mu_nt) \ \ \ \text{ and } \ \ \ \lim_{n\to\infty} \sin(\mu_nt) $$ for all $t\in\mathbb R$.
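Indeed, the factor $e^{\sigma_n^2t^2/2}$ exactly cancels the Gaussian decay of $\phi_{X_n}$ and leaves only the oscillatory part:
\begin{align*}
\phi_{X_n}(t)\,e^{\sigma_n^2t^2/2}=e^{i\mu_nt-\sigma_n^2t^2/2}\,e^{\sigma_n^2t^2/2}=e^{i\mu_nt}=\cos(\mu_nt)+i\sin(\mu_nt)\xrightarrow[n\to\infty]{}\phi_X(t)\,e^{\sigma^2t^2/2},
\end{align*}
and taking real and imaginary parts gives the two limits above.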

Now we consider the convergence of the following two integrals using DCT* \begin{align} \color{red}{\lim_{n\to\infty}\int_\mathbb R\frac{\cos(\mu_nt)}{t^2+1}\,dt=\lim_{n\to\infty}\pi e^{-|\mu_n|}} \ \ \ \text{ and } \ \ \ \color{blue}{\lim_{n\to\infty}\int_\mathbb R\frac{\sin(\mu_nt)}{t(t^2+1)}\,dt = \lim_{n\to\infty}\pi (1-e^{-|\mu_n|})\text{sgn}(\mu_n)} \end{align} It is now straightforward to see that both limits can exist iff $\mu_n$ itself converges. First deduce the convergence of $|\mu_n|$ from the red limit (boundedness of $\mu_n$ keeps $e^{-|\mu_n|}$ bounded away from $0$, so $|\mu_n|$ converges), then split into the cases where this limit is zero or nonzero. If it is zero, then $\mu_n\to 0$ directly. In the nonzero case we can deduce the convergence of $\text{sgn}(\mu_n)$ using the convergence of $(1-e^{-|\mu_n|})^{-1}$ and the blue limit. So $\mu_n=\text{sgn}(\mu_n)|\mu_n|\to\mu$ for some $\mu\in\mathbb R$. Consequently $\phi_{X_n}(t)=e^{i\mu_nt-\sigma_n^2t^2/2}\to e^{i\mu t-\sigma^2t^2/2}$ for every $t$, so $X$ is normally distributed with parameters $\mu$ and $\sigma^2$ (a point mass at $\mu$ when $\sigma^2=0$).
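For a reader who wants to check the two closed forms: the red one is the classical Fourier transform of the Cauchy kernel, and the blue one follows from the decomposition
\begin{align*}
\frac{\sin(\mu t)}{t(t^2+1)} = \frac{\sin(\mu t)}{t}-\frac{t\sin(\mu t)}{t^2+1},
\end{align*}
together with the standard (improper) integrals $\int_\mathbb R\frac{\sin(\mu t)}{t}\,dt=\pi\,\text{sgn}(\mu)$ and $\int_\mathbb R\frac{t\sin(\mu t)}{t^2+1}\,dt=\pi\,\text{sgn}(\mu)\,e^{-|\mu|}$, which combine to $\pi\,\text{sgn}(\mu)\left(1-e^{-|\mu|}\right)$.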

*: A dominating function for the blue integrand can be derived through $|\sin(\mu_nt)|\leq |\mu_nt|\leq \sup_n |\mu_n|\,|t|$, which gives the integrable bound $\sup_n|\mu_n|/(t^2+1)$; this is where the boundedness of $\mu_n$ is used. For the red integrand, $1/(t^2+1)$ works directly.