Decomposing $y_n$ into $E(y_n|x_n)+\epsilon_n$. Under what condition is $\epsilon_n=o_p(1)$?

Let $y_n,x_n$ denote two sequences of random variables. Define $$ y_n=\operatorname{E}(y_n\ |\ x_n)+\epsilon_n $$ Does $\operatorname{var}(y_n\ |\ x_n)=o_p(1)\implies \epsilon_n=o_p(1)$?

I tried to use the law of total variance, but then I need uniform integrability to prove the above. I am hoping there is another way that does not require a stronger assumption.
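
For concreteness, here is a minimal simulation sketch of the setup. The toy model, with $E(y_n\mid x_n)=x_n$ and $\operatorname{var}(y_n\mid x_n)=1/n$, is my own illustrative choice, not part of the question:

```python
# A minimal sketch (illustrative model, not from the question): take
# y_n = x_n + eps_n with eps_n | x_n ~ N(0, 1/n), so var(y_n | x_n) = 1/n -> 0
# deterministically, and watch the estimate of P(|eps_n| > delta) shrink.
import numpy as np

rng = np.random.default_rng(0)
delta, reps = 0.1, 100_000

for n in [10, 100, 1_000, 10_000]:
    x = rng.standard_normal(reps)                 # x_n
    eps = rng.standard_normal(reps) / np.sqrt(n)  # eps_n = y_n - E(y_n | x_n)
    y = x + eps                                   # y_n = E(y_n | x_n) + eps_n
    print(n, np.mean(np.abs(y - x) > delta))      # ~ P(|eps_n| > delta) -> 0
```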


Solution 1:

An idea: we can use Chebyshev's inequality conditioned on $x_n$ (writing $\delta$ for the threshold, since $\epsilon_n$ already denotes the residual), which reads

$P\big(|y_n-E[y_n\mid x_n]|>\delta\ \big|\ x_n\big)\le \frac{\operatorname{Var}(y_n\mid x_n)}{\delta^2}$

Now take expectations w.r.t. $x_n$ (i.e. multiply by $p(x_n)$ and integrate):

$P\big(|y_n-E[y_n\mid x_n]|>\delta\big)\le \frac{E[\operatorname{Var}(y_n\mid x_n)]}{\delta^2}$

If we can justify that $E[\operatorname{Var}(y_n\mid x_n)]\rightarrow 0$ as $n$ grows, then we are done, though perhaps some additional hypothesis is needed to finish this way. The hypothesis is that $\operatorname{Var}(y_n\mid x_n)$ goes to zero in probability, but I am not sure this always implies that its expected value goes to zero as well.
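
To make the closing doubt concrete: convergence in probability alone does not force the expectation to converge. Below is a hypothetical construction (my own, not from the answer) with $\operatorname{Var}(y_n\mid x_n)=n$ on an event of probability $1/n$ and $1/n$ otherwise, so $\operatorname{Var}(y_n\mid x_n)=o_p(1)$ while $E[\operatorname{Var}(y_n\mid x_n)]\approx 1$; the simulation suggests $\epsilon_n$ is nevertheless still $o_p(1)$, i.e. the unconditional Chebyshev bound is simply not tight here:

```python
# A hedged counterexample sketch (my construction, not from the answer):
# V_n = var(y_n | x_n) equals n with probability 1/n, else 1/n. Then
# V_n -> 0 in probability while E[V_n] ~ 1, so the bound
# P(|eps_n| > delta) <= E[V_n] / delta^2 does NOT go to zero -- yet the
# simulated P(|eps_n| > delta) still does.
import numpy as np

rng = np.random.default_rng(0)
delta, reps = 0.1, 200_000

for n in [10, 100, 1_000, 10_000]:
    x = rng.uniform(size=reps)                    # x_n ~ Uniform(0, 1)
    V = np.where(x < 1.0 / n, n, 1.0 / n)         # V_n = var(y_n | x_n)
    eps = np.sqrt(V) * rng.standard_normal(reps)  # eps_n | x_n ~ N(0, V_n)
    print(n, "E[V_n] ~", V.mean().round(3),
          " P(|eps_n|>delta) ~", np.mean(np.abs(eps) > delta).round(4))
```

One way to close the gap without uniform integrability, if I am not mistaken: since conditional probabilities are at most 1, the bound sharpens to $P(|\epsilon_n|>\delta)\le E[\min(1,\operatorname{Var}(y_n\mid x_n)/\delta^2)]$, and the right-hand side tends to zero by bounded convergence whenever $\operatorname{Var}(y_n\mid x_n)=o_p(1)$.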

Solution 2:

You have $$\operatorname{Var}(y|x)=\mathbb{E}((y-\mathbb{E}(y|x))^2|x)=\mathbb{E}(\epsilon^2|x)$$

Here $\epsilon$ has (conditional) mean zero. Assume $\operatorname{Var}(y|x)=0$ with probability 1. Taking unconditional expectations and using the law of iterated expectations, this implies $E(\epsilon^2)=0$ and thus $\operatorname{Var}(\epsilon)=0$, so $\epsilon=0$ with probability 1. (This settles the degenerate case; for the $o_p(1)$ version, the relevant identity is the same one, $E(\epsilon^2)=E[\operatorname{Var}(y|x)]$.)
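
A quick numerical check of the displayed identity, in an assumed toy model of my own choosing with $\operatorname{Var}(y\mid x)=x^2$: by iterated expectations, $E(\epsilon^2)$ should match $E[\operatorname{Var}(y\mid x)]=E(x^2)=1/3$ for $x\sim\text{Uniform}(0,1)$.

```python
# Sanity check (my sketch) of E(eps^2) = E[Var(y | x)] via iterated
# expectations, with Var(y | x) = x^2 chosen arbitrarily for illustration.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(size=500_000)
eps = x * rng.standard_normal(x.size)  # eps | x ~ N(0, x^2), so E(eps | x) = 0
y = 2 * x + eps                        # E(y | x) = 2x, Var(y | x) = x^2
print(np.mean(eps**2), np.mean(x**2))  # both ~ E[x^2] = 1/3
```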

Solution 3:

$E(y_n|x_n)$ is just a function of $x_n$ (call it $\mu(x_n)$), and computing $\operatorname{var}(\cdot\mid x_n)$ treats $x_n$ as a constant, so

$\operatorname{var}(y_n\mid x_n)=\operatorname{var}(\mu(x_n)+\varepsilon_n\mid x_n)=\operatorname{var}(\varepsilon_n\mid x_n)$,

and therefore $\operatorname{var}(y_n\mid x_n)=o_p(1)\iff \operatorname{var}(\varepsilon_n\mid x_n)=o_p(1)$.

I am not sure we can conclude $\varepsilon_n=o_p(1)$ from this alone, but we can say it has mean zero:

$E[y_n]=E[\mu(x_n)]+E[\varepsilon_n]\underbrace{=}_{\text {law of iter. exp.} }E[y_n]+E[\varepsilon_n]\implies E[\varepsilon_n]=0$ .
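
As a sanity check on the last display, in an assumed toy model of the same kind as above ($\mu(x_n)=2x_n$, $\operatorname{var}(\varepsilon_n\mid x_n)=x_n^2$; my illustrative choice), the sample means line up as the law of iterated expectations predicts:

```python
# Sanity check (my sketch): E[y_n] ~ E[mu(x_n)] and E[eps_n] ~ 0 in a toy
# model with mu(x_n) = 2 x_n, chosen arbitrarily for illustration.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(size=500_000)
eps = x * rng.standard_normal(x.size)  # E(eps_n | x_n) = 0 by construction
y = 2 * x + eps                        # y_n = mu(x_n) + eps_n
print(np.mean(y), np.mean(2 * x))      # E[y_n] ~ E[mu(x_n)] ~ 1
print(np.mean(eps))                    # E[eps_n] ~ 0
```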