Given a joining measure $\lambda$ on $X\times Y$, where $(X,\mu)$ and $(Y,\nu)$ are two probability measure spaces, let $\lambda=\int (\lambda_y \times \delta_y)\,d\nu(y)$ be the disintegration of $\lambda$ over $\nu$. We denote by $P_{\lambda}:L^2(X,\mu)\rightarrow L^2(Y,\nu)$ the conditional expectation operator given by $$(P_{\lambda}f)(y)=\mathbb{E}_{\lambda}^Y(f)(y)=\int_X f(x)\,d\lambda_y(x) \text{ for } \nu\text{-a.e. } y.$$ In Theorem 6.8 of the book *Ergodic Theory via Joinings* by Glasner, the author states that if $\lambda=\mu\times_{\eta}\nu$, where $(Z,\eta)$ is the common factor of $(X,\mu)$ and $(Y,\nu)$ determined by $\lambda$, then the operator $P_{\lambda}:L^2(X,\mu)\rightarrow L^2(Y,\nu)$ is the projection onto $L^2(Z,\eta)$. My doubts are the following:

  1. What does the author mean by $(Z,\eta)$ to be the common factor of $(X,\mu)$ and $(Y,\nu)$ determined by $\lambda$?
  2. How can we show that if $\lambda=\mu\times_{\eta}\nu$, then $P_{\lambda}=P_Z$, i.e. the projection onto $L^2(Z,\eta)$? $\big($Note that if $\mu=\int_Z\mu_z\,d\eta (z)$ and $\nu=\int_Z \nu_z\,d\eta (z)$ are the disintegrations of $\mu$ and $\nu$, respectively, with respect to $\eta$, then $\mu\times_{\eta}\nu:=\int_Z(\mu_z\times\nu_z)\,d\eta(z)$.$\big)$

Thanks in advance for any help.


Below I won't get into complete details. One important point is that since the spaces on which the group acts are "standard" from the measure-theoretical point of view, morphisms between $\sigma$-algebras (or measure algebras) determine a.e.-unique morphisms of measure spaces, and conditional expectations can be upgraded to conditional measures. Thus the standardness assumption allows one to transition between multiple categories with minimal hassle.


  1. According to Thm.6.6 on p.129, any joining $\lambda\in \mathbb{J}(\mathfrak{X},\mathfrak{Y})$ of two systems $\mathfrak{X}=(X,\mathcal{X},\mu,\Gamma)$ and $\mathfrak{Y}=(Y,\mathcal{Y},\nu,\Gamma)$ determines two invariant sub-$\sigma$-algebras $\mathcal{L}\leq \mathcal{X}$ and $\mathcal{R}\leq \mathcal{Y}$ such that $\mathcal{L}\times Y=_\lambda X\times \mathcal{R}$ (as Boolean sub-$\sigma$-algebras of $\mathcal{X}\otimes\mathcal{Y}$). More precisely, $\mathcal{L}$ and $\mathcal{R}$ are defined like so:

$$\mathcal{L}=\{A\in\mathcal{X}\,|\, \exists B\in\mathcal{Y}: \lambda((A\times Y)\triangle (X\times B))=0\},$$

$$\mathcal{R}=\{B\in\mathcal{Y}\,|\, \exists A\in\mathcal{X}: \lambda((A\times Y)\triangle (X\times B))=0\}.$$

(In my opinion it's useful to draw a caricature here. A square represents $X\times Y$, and by way of the joining $\lambda$ we can replace vertical strips (i.e. elements of $\mathcal{L}\times Y$) with horizontal strips (i.e. elements of $X\times \mathcal{R}$), up to $\lambda$-null sets.)

This gives an isomorphism of Boolean $\sigma$-algebras $\mathcal{L}\to\mathcal{R}$. Since all the spaces involved are "standard", $\mathcal{L}$ determines a factor $\mathfrak{L}$ of $\mathfrak{X}$ and likewise $\mathcal{R}$ determines a factor $\mathfrak{R}$ of $\mathfrak{Y}$, and the isomorphism $\mathcal{L}\to\mathcal{R}$ gives an isomorphism $\mathfrak{L}\to\mathfrak{R}$ of systems. $\mathfrak{Z}_\lambda=(Z_\lambda,\mathcal{Z}_\lambda,\xi_\lambda,\Gamma)$ is an anonymous representative of the isomorphism class of $\mathfrak{L}$ (which is the same as the isomorphism class of $\mathfrak{R}$). $\mathfrak{Z}_\lambda$ is called the common factor of $\mathfrak{X}$ and $\mathfrak{Y}$ determined by the joining $\lambda\in\mathbb{J}(\mathfrak{X},\mathfrak{Y})$.

Note that Thm.6.5 gives the analogous statement with $\mathcal{L}$ replaced with the whole $\mathcal{X}$ and $\mathcal{R}$ replaced with the whole $\mathcal{Y}$; in this case the systems themselves are isomorphic, instead of having isomorphic factors.
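
(Two extreme examples, not from the book but perhaps useful for calibration: for the product joining $\lambda=\mu\times\nu$ one has, for $A\in\mathcal{X}$ and $B\in\mathcal{Y}$,

$$\lambda\bigl((A\times Y)\triangle(X\times B)\bigr)=\mu(A)\bigl(1-\nu(B)\bigr)+\bigl(1-\mu(A)\bigr)\nu(B),$$

which can vanish only if $\mu(A),\nu(B)\in\{0,1\}$; hence $\mathcal{L}$ and $\mathcal{R}$ consist of null and conull sets and the common factor $\mathfrak{Z}_\lambda$ is trivial. At the other extreme, if $\mathfrak{X}=\mathfrak{Y}$ and $\Delta$ is the diagonal self-joining, then $\Delta\bigl((A\times X)\triangle(X\times A)\bigr)=\mu(A\triangle A)=0$ for every $A\in\mathcal{X}$, so $\mathcal{L}=\mathcal{X}$, $\mathcal{R}=\mathcal{Y}$ and $\mathfrak{Z}_\Delta\cong\mathfrak{X}$.)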


  2. Let us consider the category of systems (or of standard probability spaces, resp.), whose arrows are equivariant measurable measure-preserving maps (or measurable measure-preserving maps, resp.). For readability let us suppress everything but the space (= point set = largest element of the associated $\sigma$-algebra = spatial model) and the associated measure, unless it is necessary to be explicit. Let us also denote by $L^0$ the space of real-valued measurable functions identified a.e. according to the prescribed measure. If $\phi: (X,\mu)\to (Y,\nu)$ is an arrow, on function spaces we have two maps: one is the categorical pullback (= Koopman operator) $\overleftarrow{\phi}: L^0(Y,\nu)\to L^0(X,\mu)$. The other is the conditional expectation $\phi_!: L^0(X,\mu)\to L^0(Y,\nu)$. More explicitly, $\phi: (X,\mu)\to (Y,\nu)$ determines a sub-$\sigma$-algebra $\overleftarrow{\phi}(\mathcal{Y})\leq \mathcal{X}$, which in turn determines a conditional expectation operator $\mathbb{E}_\mu\left(\cdot\,\left|\, \overleftarrow{\phi}(\mathcal{Y})\right.\right): L^0(X,\mathcal{X},\mu)\to L^0\left(X,\overleftarrow{\phi}(\mathcal{Y}),\left.\mu\right\vert_{\overleftarrow{\phi}(\mathcal{Y})}\right)$. Then $\phi_!$ is defined unambiguously by composing this conditional expectation with the canonical identification of $L^0\left(X,\overleftarrow{\phi}(\mathcal{Y}),\left.\mu\right\vert_{\overleftarrow{\phi}(\mathcal{Y})}\right)$ with $L^0(Y,\mathcal{Y},\nu)$, i.e. with the inverse of $\overleftarrow{\phi}$ regarded as an isomorphism onto its image:

$$\phi_!:=\left(\overleftarrow{\phi}\right)^{-1}\circ\ \mathbb{E}_\mu\left(\cdot\,\left|\, \overleftarrow{\phi}(\mathcal{Y})\right.\right).$$

What is more, $\phi_!$ is the left inverse to $\overleftarrow{\phi}$, i.e. $\phi_!\circ \overleftarrow{\phi}=\operatorname{id}_{L^0(Y,\nu)}$.
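
To spell this out with the definition above: $\overleftarrow{\phi}(g)=g\circ\phi$ is already $\overleftarrow{\phi}(\mathcal{Y})$-measurable, hence it is fixed by the conditional expectation, and undoing the identification gives back $g$:

$$\phi_!\bigl(\overleftarrow{\phi}(g)\bigr)=\left(\overleftarrow{\phi}\right)^{-1}\Bigl(\mathbb{E}_\mu\bigl(g\circ\phi\,\big|\,\overleftarrow{\phi}(\mathcal{Y})\bigr)\Bigr)=\left(\overleftarrow{\phi}\right)^{-1}\bigl(g\circ\phi\bigr)=g.$$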

Let us note the relationship between $\phi_!$ and the disintegration of $\mu$ along $\phi$. If $f\in L^0(X,\mu)$, then

$$\int_X f(x)d\mu(x) = \int_Y \int_X f(x)\, d\mu_y(x) \, d\nu(y) = \int_Y \phi_!(f)(y)\, d\nu(y),$$

that is, $\phi_!(f)(y)=\int_X f(x)\,d\mu_y(x)$ for $y\in Y$ a typical point.
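
(A toy example, not from the book, to keep in mind: if $X$ is finite with $\mu(\{x\})>0$ for all $x$, $\phi:X\to Y$ is onto and $\nu=\phi_*\mu$, then the disintegration is fiber-wise normalization and $\phi_!$ is the fiber-wise average,

$$\mu_y=\frac{1}{\nu(\{y\})}\,\mu\big|_{\phi^{-1}(y)},\qquad \phi_!(f)(y)=\frac{1}{\nu(\{y\})}\sum_{x\in\phi^{-1}(y)}f(x)\,\mu(\{x\}),$$

from which $\phi_!\bigl(g\circ\phi\bigr)=g$ is immediate.)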

Just as $\overleftarrow{\phi\circ \psi}=\overleftarrow{\psi}\circ\overleftarrow{\phi}$, we have $(\phi\circ\psi)_!=\phi_!\circ \psi_!$. Still, from the categorical point of view conditional expectation is a wrong-way map.
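
As a consistency check, the second identity can be verified with disintegrations: say $\psi:(X,\mu)\to(Y,\nu)$ and $\phi:(Y,\nu)\to(Z,\xi)$, and write $\mu=\int_Y\mu_y\,d\nu(y)$, $\nu=\int_Z\nu_z\,d\xi(z)$. Then $z\mapsto\int_Y\mu_y\,d\nu_z(y)$ is (a version of) the disintegration of $\mu$ along $\phi\circ\psi$, so for a typical $z$

$$(\phi\circ\psi)_!(f)(z)=\int_Y\int_X f(x)\,d\mu_y(x)\,d\nu_z(y)=\int_Y\psi_!(f)(y)\,d\nu_z(y)=\phi_!\bigl(\psi_!(f)\bigr)(z).$$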

Next let us consider an anonymous square in the category of systems (or in the category of standard probability spaces):

$$\begin{array}{ccc} (W,\lambda) & \xrightarrow{\ \chi\ } & (Y,\nu)\\ \phi\downarrow & & \downarrow\omega\\ (X,\mu) & \xrightarrow{\ \psi\ } & (Z,\xi) \end{array} \qquad\qquad \psi\circ\phi=\omega\circ\chi.$$

Applying the pullbacks and conditional expectations we get the following squares:

$$\begin{array}{ccc} L^0(W,\lambda) & \xleftarrow{\ \overleftarrow{\chi}\ } & L^0(Y,\nu)\\ \overleftarrow{\phi}\uparrow & & \uparrow\overleftarrow{\omega}\\ L^0(X,\mu) & \xleftarrow{\ \overleftarrow{\psi}\ } & L^0(Z,\xi) \end{array} \qquad\qquad \begin{array}{ccc} L^0(W,\lambda) & \xrightarrow{\ \chi_!\ } & L^0(Y,\nu)\\ \phi_!\downarrow & & \downarrow\omega_!\\ L^0(X,\mu) & \xrightarrow{\ \psi_!\ } & L^0(Z,\xi) \end{array}$$

To go from $L^0(X,\mu)$ to $L^0(Y,\nu)$ we can use pullbacks and conditional expectations in tandem, and indeed, Glasner's $P_\lambda$ is exactly (the $L^2$ analog of) $\chi_!\circ \overleftarrow{\phi}$, and the statement that $P_\lambda=P_Z$ is exactly (the $L^2$ analog of) $\chi_!\circ \overleftarrow{\phi}=\overleftarrow{\omega}\circ \psi_!$, that is, as a diagram,

$$\begin{array}{ccc} L^0(W,\lambda) & \xrightarrow{\ \chi_!\ } & L^0(Y,\nu)\\ \overleftarrow{\phi}\uparrow & & \uparrow\overleftarrow{\omega}\\ L^0(X,\mu) & \xrightarrow{\ \psi_!\ } & L^0(Z,\xi) \end{array}$$
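
Spelling out the first of these claims: take the square coming from the joining itself, i.e. $W=X\times Y$, $\phi=\pi_X$, $\chi=\pi_Y$. The disintegration of $\lambda$ over $\nu$ along $\pi_Y$ is $y\mapsto\lambda_y\times\delta_y$ in the notation of the question, so for a typical $y$

$$\chi_!\circ\overleftarrow{\phi}(f)(y)=\int_{X\times Y}f\circ\pi_X\,d(\lambda_y\times\delta_y)=\int_X f(x)\,d\lambda_y(x)=(P_\lambda f)(y).$$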

The "common factor determined by a joining" construction from the previous item says that if the bottom right corner of an anonymous square is missing, it can be filled in:

$$\begin{array}{ccc} (X\times Y,\lambda) & \xrightarrow{\ \pi_Y\ } & (Y,\nu)\\ \pi_X\downarrow & & \downarrow\\ (X,\mu) & \longrightarrow & (Z_\lambda,\xi_\lambda) \end{array}$$

Likewise, the relatively independent joining construction you mention says that if the top left corner of an anonymous square is missing, it can be filled in:

$$\begin{array}{ccc} (X\times Y,\mu\times_\xi\nu) & \xrightarrow{\ \pi_Y\ } & (Y,\nu)\\ \pi_X\downarrow & & \downarrow\omega\\ (X,\mu) & \xrightarrow{\ \psi\ } & (Z,\xi) \end{array}$$
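
One point worth making explicit: the square so obtained commutes only almost everywhere, but that is enough in this setting. Indeed, for $\xi$-a.e. $z$ the fiber measures $\mu_z$ and $\nu_z$ are carried by $\psi^{-1}(z)$ and $\omega^{-1}(z)$ respectively, so

$$(\mu\times_\xi\nu)\bigl(\{(x,y): \psi(x)=\omega(y)\}\bigr)=\int_Z(\mu_z\times\nu_z)\bigl(\{(x,y): \psi(x)=\omega(y)\}\bigr)\,d\xi(z)\ \geq\ \int_Z(\mu_z\times\nu_z)\bigl(\psi^{-1}(z)\times\omega^{-1}(z)\bigr)\,d\xi(z)=1,$$

i.e. $\psi\circ\pi_X=\omega\circ\pi_Y$ holds $\mu\times_\xi\nu$-almost everywhere.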

Further, in this case by Exr.6.7.1 on p.129 we'll have that $(Z,\xi)\cong (Z_{\mu\times_\xi \nu},\xi_{\mu\times_\xi \nu})$, i.e. up to isomorphism $(Z,\xi)$ is the common factor of $(X,\mu)$ and $(Y,\nu)$ determined by the relatively independent joining $\mu\times_\xi \nu$.

With all this background we can rewrite Thm.6.8 of Glasner like so:

Theorem: Consider an anonymous square in the category of systems:

$$\begin{array}{ccc} (W,\lambda) & \xrightarrow{\ \chi\ } & (Y,\nu)\\ \phi\downarrow & & \downarrow\omega\\ (X,\mu) & \xrightarrow{\ \psi\ } & (Z,\xi) \end{array}$$

Then $(W,\lambda)\cong (X\times Y, \mu\times_\xi \nu)$ and $(Z,\xi)\cong (Z_\lambda,\xi_\lambda)$ iff $\chi_!\circ \overleftarrow{\phi}=\overleftarrow{\omega}\circ \psi_!$.

(You asked only about necessity, so I'll give details for that direction only; if needed I can add details for sufficiency as well, with this formalism.)

Proof of necessity: Assume $\lambda=\mu\times_\xi \nu$ and fix $f\in L^0(X,\mu)$. We claim that $\chi_!\circ \overleftarrow{\phi}(f)=_\nu\overleftarrow{\omega}\circ \psi_!(f)$. At a typical $y\in Y$, starting from the LHS, we have:

\begin{align*} \chi_!\circ \overleftarrow{\phi}(f)(y) &=\int_{W} f\circ \phi(w) \,d(\mu\times_\xi \nu)_y(w) =\int_X f(x)\, d\mu_{\omega(y)}(x)\\ &=\psi_!(f)\circ \omega(y) =\overleftarrow{\omega}\circ \psi_!(f)(y). \end{align*}

Here the commutativity of the diagram is used in the second equality, through the explicit form of the disintegration of $\mu\times_\xi\nu$ over $\nu$.
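
Namely, for bounded measurable $g$ on $X\times Y$,

$$\int g\,d(\mu\times_\xi\nu)=\int_Z\int_Y\int_X g(x,y)\,d\mu_z(x)\,d\nu_z(y)\,d\xi(z)=\int_Y\int_X g(x,y)\,d\mu_{\omega(y)}(x)\,d\nu(y),$$

since $\omega(y)=z$ for $\nu_z$-a.e. $y$ and $\xi$-a.e. $z$. By the a.e.-uniqueness of disintegrations, $(\mu\times_\xi\nu)_y=\mu_{\omega(y)}\times\delta_y$ for $\nu$-a.e. $y$, which is exactly what the second equality above uses.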