Conditional expectation given an event is equivalent to conditional expectation given the sigma algebra generated by the event

Solution 1:

For absolute clarity, I will add to the above answers, even though this is basically a repeat of the above. Fix a probability space $(\Omega, \mathcal{H}, P)$ and an integrable random variable $X$ on it. For any event $H\in\mathcal{H}$, we can define the $\sigma$-field generated by $H$ to be the smallest $\sigma$-field containing $H$, i.e., $$\sigma(H) := \{\emptyset, H, H^c, \Omega\}.$$ If, in addition, $H$ is such that $P(H)>0$, we can formally define $$E(X\,\vert\, H) := \frac{1}{P(H)}\int_H X\,dP = \frac{E(X1_H)}{P(H)}.$$ The question also involves another mathematical object, $E(X\,\vert\, \mathcal{F})$. By definition, for any sub-$\sigma$-field $\mathcal{F}\subseteq \mathcal{H}$, $E(X\,\vert\, \mathcal{F})$ is defined to be any $\mathcal{F}$-measurable random variable such that $$E(X1_A) \equiv \int_A X\,dP = \int_A E(X\,\vert\, \mathcal{F})\,dP\equiv E(E(X\,\vert\, \mathcal{F})1_A) \quad\quad(A \in \mathcal{F}).$$ One can prove that $E(X\,\vert\, \mathcal{F})$ exists and is almost surely unique (e.g., see page 222 of *Probability: Theory and Examples* by Rick Durrett).
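As a quick concrete illustration (my own example, not part of the original question): take $\Omega = \{1,2,3,4,5,6\}$ with the uniform measure, let $X(\omega)=\omega$ be the face shown by a fair die, and let $H=\{2,4,6\}$ be the event that the roll is even. Then $$\sigma(H) = \{\emptyset, \{2,4,6\}, \{1,3,5\}, \Omega\}\quad\text{and}\quad E(X\,\vert\, H) = \frac{E(X1_H)}{P(H)} = \frac{(2+4+6)/6}{1/2} = 4,$$ the average of $X$ over the even faces.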

With the definitions in place, the question asks us to prove (for $H\in\mathcal{H}$ such that $P(H)>0$) that $$E(X\,\vert\, \mathcal{F})(\omega) = E(X\,\vert\, H)\quad \text{for almost all } \omega\in H,$$ where $\mathcal{F} := \sigma(H)$. By following the definition of $E(X\,\vert\,\mathcal{F})$, one can show fairly easily (a verification is sketched below) that $$E(X\,\vert\,\mathcal{F}) = E(X\,\vert\, H)1_H + E(X\,\vert\, H^c)1_{H^c}\quad \text{almost surely, if } 0<P(H)<1,$$ and that $$E(X\,\vert\,\mathcal{F}) = E(X\,\vert\, H)1_H\quad \text{almost surely, if } P(H)=1.$$ In either case, we find that $$E(X\,\vert\,\mathcal{F})(\omega) = E(X\,\vert\, H)\quad \text{for almost all }\omega\in H.$$ (A technical point: for a general sub-$\sigma$-field, $E(X\,\vert\,\mathcal{F})$ is only determined up to a $P$-null set, which is why such statements are usually phrased "almost surely". For $\mathcal{F}=\sigma(H)$ with $P(H)>0$ the conclusion is in fact stronger: every version of $E(X\,\vert\,\mathcal{F})$ is $\mathcal{F}$-measurable and hence constant on $H$, and the defining identity with $A=H$ forces that constant to be $E(X\,\vert\, H)$, so the equality holds for every $\omega\in H$. Two versions can only disagree on $H^c$, and only when $P(H^c)=0$.)
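For completeness, here is one way to fill in those details in the case $0<P(H)<1$. The candidate $Y := E(X\,\vert\, H)1_H + E(X\,\vert\, H^c)1_{H^c}$ is constant on $H$ and on $H^c$, hence $\mathcal{F}$-measurable, and it satisfies the defining identity for each of the four sets in $\mathcal{F}$: for $A=H$, $$E(Y1_H) = E(X\,\vert\, H)\,P(H) = E(X1_H),$$ symmetrically $E(Y1_{H^c}) = E(X\,\vert\, H^c)\,P(H^c) = E(X1_{H^c})$ for $A=H^c$, the case $A=\emptyset$ is trivial, and $A=\Omega$ follows by adding the previous two. By the almost-sure uniqueness of conditional expectation, $Y = E(X\,\vert\,\mathcal{F})$ almost surely. (When $P(H)=1$, the same computation applies with the $H^c$ term dropped, since $H^c$ is null.)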

More generally, if $\mathcal{P} = \{H_1,H_2,\dots\}$ is a countable partition of $\Omega$ (i.e., $H_i\cap H_j=\emptyset$ for all $i\ne j$ and $\bigcup_{i=1}^{\infty} H_i = \Omega$) such that $P(H_i)>0$ for all $i\ge 1$, one can show in a similar manner that $$E(X\,\vert\,\mathcal{F})(\omega) = E(X\,\vert\, H_i)\quad\text{for almost all }\omega\in H_i,$$ where $\mathcal{F}:= \sigma(H_1,H_2,\dots)$ is the smallest $\sigma$-field containing all the elements of $\mathcal{P}$.
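If you like to see such identities numerically, here is a minimal Monte Carlo sketch (the choice of $X$, the partition, and all names are my own illustration, not from the question). It approximates $E(X\,\vert\,\mathcal{F})$ by averaging $X$ over each cell of a partition of $[0,1)$ into four intervals, then checks the defining identity $E(X1_{H_i}) = E(E(X\,\vert\,\mathcal{F})1_{H_i})$ on each cell:

```python
import numpy as np

# Illustrative setup (hypothetical): Omega is approximated by draws of
# U ~ Uniform[0, 1), X = U**2, and the partition cells are
# H_i = {(i-1)/4 <= U < i/4} for i = 1, ..., 4.
rng = np.random.default_rng(0)
u = rng.uniform(size=1_000_000)
x = u**2

cell = np.digitize(u, [0.25, 0.5, 0.75])   # index i-1 of the cell containing each draw

# E(X | F) at a sample point is the average of X over that point's cell H_i.
cell_means = np.array([x[cell == i].mean() for i in range(4)])
cond_exp = cell_means[cell]

# Check E(X 1_{H_i}) = E(E(X|F) 1_{H_i}) for each cell of the partition.
for i in range(4):
    in_cell = (cell == i)
    print(i, (x * in_cell).mean(), (cond_exp * in_cell).mean())
```

Each printed pair agrees up to floating-point error, and `cell_means[i]` approximates $E(X\,\vert\, H_i)$, the average of $X$ over the $i$-th cell.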

The result basically says that if all of the available information (i.e., $\mathcal{F}$) comes from distinct pieces of information (i.e., the $H_i$), then the "best guess" for $X$ on $H_i$ (i.e., $E(X\,\vert\,\mathcal{F})(\omega)$ for $\omega\in H_i$) is the average of $X$ over $H_i$ (i.e., $E(X\,\vert\, H_i)$). While I believe the word "intuition" is wildly overused in mathematics, this result does seem relatively "intuitive".

Solution 2:

Here is the problem stated in full context:

Let $(\Omega, \mathcal H,\mathbb P)$ be a probability space. Let $H\in\mathcal H$ and let $\mathcal F:=\sigma(H) = \{\varnothing, H, H^c, \Omega\}$. Show that $$\mathbb E[X\mid \mathcal F](\omega) = \mathbb E[X\mid H] $$ for all $\omega\in H$.

The conditional expectation of $X$ given the event $H$ is defined in the text (for $\mathbb P(H)>0$) by $$\mathbb E[X\mid H] = \frac1{\mathbb P(H)}\int_H X\ \mathsf d\mathbb P = \frac{\mathbb E[X\mathsf 1_H]}{\mathbb P(H)}. $$ By the defining property of conditional expectation applied to the set $H\in\mathcal F$, $$\mathbb E[\mathbb E[X\mid\mathcal F]\mathsf 1_H]=\mathbb E[X\mathsf 1_H]=\mathbb E[X\mid H]\,\mathbb P(H). $$ Moreover, $\mathbb E[X\mid\mathcal F]$ is $\mathcal F$-measurable and $\mathcal F=\sigma(H)$, so it is constant on $H$; its value there is $\mathbb E[X\mid\mathcal F](\omega)$ for any $\omega\in H$. Hence $$\mathbb E[\mathbb E[X\mid\mathcal F]\mathsf 1_H] = \mathbb E[X\mid\mathcal F](\omega)\,\mathbb P(H) = \mathbb E[X\mid H]\,\mathbb P(H), $$ and dividing by $\mathbb P(H)>0$ we conclude that $$\mathbb E[X\mid\mathcal F](\omega) = \mathbb E[X\mid H]\quad\text{for all }\omega\in H.$$
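For completeness, here is the small measurability fact used above, stated in my own words: any $\mathcal F$-measurable random variable $Y$, with $\mathcal F=\sigma(H)$, is constant on $H$. Indeed, if $Y$ took two distinct values $a\ne b$ at points of $H$, then $\{Y=a\} = Y^{-1}(\{a\})$ would be an element of $\mathcal F = \{\varnothing, H, H^c, \Omega\}$ intersecting $H$, so it would have to be $H$ or $\Omega$; either way it contains $H$, contradicting $Y=b\ne a$ at some point of $H$. Applied to $Y=\mathbb E[X\mid\mathcal F]$, this is exactly why $\mathbb E[X\mid\mathcal F](\omega)$ is the same number for every $\omega\in H$.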