Solution 1:

Here is a counterexample: assume each $Y_i$ has distribution $\mathcal N(0,1)$, so that $\frac 1{\sqrt n}\sum_{i=1}^nY_i \sim \mathcal N(0,1)$.

Suppose, for the sake of contradiction, that there is some $X\sim \mathcal N(0,1)$ independent of $(Y_1,Y_2,\ldots)$ with $\frac 1{\sqrt n}\sum_{i=1}^nY_i \xrightarrow{L^p}X$.

Since $X$ is independent of $(Y_1,Y_2,\ldots)$, the random variable $\frac 1{\sqrt n}\sum_{i=1}^nY_i - X$ is the difference of two independent $\mathcal N(0,1)$ variables and therefore has distribution $\mathcal N(0,2)$ for every $n$. Thus

$$E\left[\left|\frac 1{\sqrt n}\sum_{i=1}^nY_i - X\right|^p\right]$$

is a positive constant that does not depend on $n$ (namely the $p$-th absolute moment of $\mathcal N(0,2)$). In particular it does not converge to $0$ as $n\to \infty$, a contradiction.
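A quick Monte Carlo sketch of this fact (the function name `lp_distance` and the sample sizes are ad hoc, not from the original argument): the estimated $L^p$ distance between $Z_n = \frac 1{\sqrt n}\sum_{i=1}^n Y_i$ and an independent $X\sim\mathcal N(0,1)$ stays flat as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_distance(n, p=2, trials=50_000):
    """Monte Carlo estimate of E|Z_n - X|^p, where Z_n = (Y_1+...+Y_n)/sqrt(n)
    and X ~ N(0,1) is drawn independently of the Y_i."""
    Y = rng.standard_normal((trials, n))   # i.i.d. Y_i ~ N(0,1)
    Z_n = Y.sum(axis=1) / np.sqrt(n)       # Z_n ~ N(0,1) for every n
    X = rng.standard_normal(trials)        # independent of the Y_i
    return np.mean(np.abs(Z_n - X) ** p)

# For p = 2 the exact value is Var(N(0,2)) = 2, regardless of n:
for n in (10, 100, 400):
    print(n, lp_distance(n))
```

Each printed estimate hovers near $2$, matching the claim that the $L^2$ distance is a constant bounded away from $0$.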

Solution 2:

A summary of the brief discussion with Gabriel Romon:

Given any independent, identically distributed random variables $(Y_k)_{k\in\mathbb N}$ with $\mathsf EY_k=0$ and $\mathsf VY_k=1$, if $X$ is independent of $(Y_k)_{k\in\mathbb N}$, then according to the Central Limit Theorem,* $-X+\frac1{\sqrt n}\sum_{k=1}^n Y_k$ converges in distribution to (slight abuse of notation ahead) $$\mathcal N(0,1)+\mathcal N(0,1)=\mathcal N(0,2)\neq0$$ as $n\to\infty$.

In particular, since convergence in $L^r$ implies convergence in probability (by Markov's inequality), which in turn implies convergence in distribution, it follows that $-X +\frac1{\sqrt n}\sum_{k=1}^n Y_k$ cannot converge to $0$ in $L^r$.
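For completeness, the implication chain just invoked can be spelled out (here $W_n$ and $W$ are generic random variables, introduced only for this remark): if $W_n\to W$ in $L^r$, then for any $\varepsilon>0$, Markov's inequality gives $$\mathsf P(|W_n-W|>\varepsilon)=\mathsf P\left(|W_n-W|^r>\varepsilon^r\right)\le\frac{\mathsf E|W_n-W|^r}{\varepsilon^r}\xrightarrow[n\to\infty]{}0,$$ so $W_n\to W$ in probability, and convergence in probability implies convergence in distribution.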


* A more careful formulation: Let $Z_n\overset{\text{Def.}}=\frac1{\sqrt n}\sum_{k=1}^n Y_k, n\in\mathbb N$. For any $a,b\in\mathbb R$, we have, by independence and the Central Limit Theorem, $$\lim_{n\to\infty}\mathsf P(-X\le a\land Z_n\le b)=\lim_{n\to\infty}\mathsf P(-X\le a)\mathsf P(Z_n\le b)=\mathsf P(-X\le a)\mathsf P(\mathcal N(0,1)\le b).$$

Let $\mathsf P_{(-X,\mathcal N(0,1))}$ be the measure on $\mathbb R^2$ that is uniquely determined (exercise: prove that it is unique) by the condition $\mathsf P_{(-X,\mathcal N(0,1))}(A\times B)=\mathsf P(-X\in A)\mathsf P(\mathcal N(0,1)\in B)$ for all measurable $A,B\subset\mathbb R$.

Then, by the preceding argument, $\mathsf P_{(-X,Z_n)}$ converges weakly (in the sense of measures) to $\mathsf P_{(-X,\mathcal N(0,1))}$. Let $h:\mathbb R^2\to\mathbb R, (x,y)\mapsto x+y$, and let $f:\mathbb R\to\mathbb R$ be any continuous, bounded function. Then $f\circ h:\mathbb R^2\to \mathbb R$ is continuous and bounded. By the Portmanteau Theorem, $$\lim_{n\to\infty}\int_{\mathbb R} f\,\mathrm d\mathsf P_{-X+Z_n}=\lim_{n\to\infty}\int_{\mathbb R^2} f\circ h\,\mathrm d\mathsf P_{(-X, Z_n)}=\int_{\mathbb R^2} f\circ h\,\mathrm d\mathsf P_{(-X, \mathcal N(0,1))}=\int_{\mathbb R} f\,\mathrm d\mathsf P_{\mathcal N(0,2)}.$$

Using Portmanteau again, I conclude that $\mathsf P_{-X+Z_n}\to\mathsf P_{\mathcal N(0,2)}$ weakly as $n\to\infty$, which is the same as saying that $-X+Z_n\to\mathcal N(0,2)$ in distribution. This completes the argument.
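The convergence $-X+Z_n\to\mathcal N(0,2)$ in distribution can also be eyeballed numerically. The following sketch (variable names and sample sizes are my own choices, not from the discussion) compares the empirical CDF of $-X+Z_n$ against the exact $\mathcal N(0,2)$ CDF:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def n02_cdf(b):
    """Exact CDF of N(0, 2) at b: 0.5 * (1 + erf(b / (sigma * sqrt(2))))."""
    return 0.5 * (1.0 + erf(b / (sqrt(2.0) * sqrt(2.0))))

n, trials = 200, 50_000
Y = rng.standard_normal((trials, n))   # i.i.d. Y_k ~ N(0,1)
Z_n = Y.sum(axis=1) / np.sqrt(n)
X = rng.standard_normal(trials)        # independent of the Y_k
S = -X + Z_n                           # should look like N(0, 2)

for b in (-1.0, 0.0, 1.0):
    print(b, (S <= b).mean(), n02_cdf(b))
```

The empirical frequencies $\mathsf P(S\le b)$ line up with the theoretical values, consistent with weak convergence to $\mathcal N(0,2)$ rather than to the constant $0$.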