Why is the total variation of a complex measure defined in this way?

I have the following question about the meaning of the total variation measure. If $(X,\mathcal{A})$ is a measurable space and $\nu$ is a signed measure, the total variation of $\nu$ is defined as $|\nu|=\nu^++\nu^-$, which gives the intuition that $|\nu|$ measures "how much charge" has been placed on a set, regardless of whether parts of that charge are positive and parts negative, cancelling one another. Now if $\mu=\mu_1+i\mu_2$ is a complex measure, with $\mu_1,\mu_2$ finite signed measures, the total variation of $\mu$ is defined as follows: $$|\mu|(E)=\sup\Big\{\sum_{j=1}^{n}|\mu(E_j)| : (E_j)_{j=1}^{n}\text{ disjoint, in } \mathcal{A},\ \bigcup_{j=1}^{n}E_j=E\Big\}$$

My intuition would have been to define $|\mu|$ as $|\mu_1|+|\mu_2|$, a measure that counts the total charge assigned to a set by both $\mu_1$ and $\mu_2$. If someone could explain the reasoning behind the supremum definition, I would be grateful.
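For a measure concentrated on finitely many atoms, the supremum in the definition can be checked by brute force over all partitions. A minimal sketch (the atoms and their complex weights are made-up values): by the triangle inequality, the finest partition, into singletons, attains the sup, so $|\mu|(E)$ is the sum of the moduli of the weights.

```python
import math

atoms = {"x": 3 + 4j, "y": -1 + 1j, "z": 2 - 2j}  # point -> complex weight

def mu(block):
    """mu of a finite union of atoms."""
    return sum(atoms[p] for p in block)

def set_partitions(items):
    """Yield every partition of the list `items` as a list of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                      # `first` in its own block
        for i in range(len(part)):                  # or merged into a block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

points = list(atoms)
sup_value = max(sum(abs(mu(b)) for b in p) for p in set_partitions(points))
finest = sum(abs(w) for w in atoms.values())        # 5 + 3*sqrt(2)
print(sup_value, finest)
```

Coarser partitions can only lose mass to cancellation, which is exactly why the definition takes a supremum rather than fixing one partition.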


Solution 1:

The problem is that the modulus $|\cdot|$ on $\mathbb{C}$ is not additive. Consider the complex measure $$ \mu=(a+ib)\delta,$$ where $\delta$ is a Dirac measure. Whatever the definition of $|\mu|$ is, I expect that, in this case, it should boil down to $|\mu|=\sqrt{a^2+b^2}\,\delta$. With your proposed definition $|\mu|:=|\mu_1|+|\mu_2|$, however, we would get $|\mu|=(|a|+|b|)\,\delta$.
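A quick numerical illustration of the gap between the two values, using the arbitrary choice $a=3$, $b=4$:

```python
import math

a, b = 3.0, 4.0                  # arbitrary weight a + ib
modulus = math.hypot(a, b)       # sqrt(a^2 + b^2): the value we want, 5
naive = abs(a) + abs(b)          # what |mu_1| + |mu_2| would assign, 7
print(modulus, naive)
```

The two agree only when $a$ or $b$ vanishes; in general $|a|+|b|$ can overshoot $\sqrt{a^2+b^2}$ by a factor of up to $\sqrt{2}$.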


This example might suggest the definition $|\mu|:= \sqrt{ |\mu_1|^2+|\mu_2|^2}$, i.e. $|\mu|(E):=\sqrt{|\mu_1|(E)^2+|\mu_2|(E)^2}$. This is not good either. If $\mu=(a+ib)\delta_x + (c+id)\delta_y$, with $x\ne y$, then we expect the total mass $$ |\mu|(X)=\sqrt{a^2+b^2}+\sqrt{c^2+d^2}.$$ With the above definition we would instead have $|\mu|(X)=\sqrt{(|a|+|c|)^2+(|b|+|d|)^2}$; indeed this set function fails to be additive.
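Numerically, with the made-up weights $a+ib=1$ at $x$ and $c+id=i$ at $y$, the mismatch is already visible:

```python
import math

a, b = 1.0, 0.0    # weight a + ib = 1 at x
c, d = 0.0, 1.0    # weight c + id = i at y

# Expected total mass: sum of the moduli of the two atomic weights.
expected = math.hypot(a, b) + math.hypot(c, d)             # = 2
# The candidate sqrt(|mu_1|(X)^2 + |mu_2|(X)^2) instead gives:
candidate = math.hypot(abs(a) + abs(c), abs(b) + abs(d))   # = sqrt(2)
print(expected, candidate)
```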

Solution 2:

I do not know exactly what the full answer should be, but I asked the following question in the community:

Is it possible to exhibit an example where a set $E$ is written as two different partitions that generate different sums? That is, can we write $ E = \bigsqcup_{i=1}^{\infty} E_i = \bigsqcup_{i=1}^{\infty} F_i $, where $ \{ {E}_i \}_{i \in \mathbb N} $ and $ \{ {F}_i \}_{i \in \mathbb N} $ are different partitions of $ E $, such that

$$ \sum_{i=1}^{\infty} |\mu (E_i)| \neq \sum_{i=1}^{\infty} |\mu (F_i)|? $$

The user user284331 gave the following counterexample:

With $\mu(A)=\displaystyle\int_{A}x^{3}\,dx$ on $E=[-1,1]$, we compute that $\mu([0,1])=1/4$, $\mu([-1,0))=-1/4$, $\mu([1/2,1])=15/64$, $\mu([-1,1/2))=-15/64$. So the partition $E=[-1,0)\sqcup[0,1]$ gives the sum $1/4+1/4=1/2$, while the partition $E=[-1,1/2)\sqcup[1/2,1]$ gives $15/64+15/64=15/32$: two partitions of the same set with different sums.
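These values can be checked exactly with rational arithmetic, using the antiderivative $x^4/4$ (a small sketch; `Fraction` keeps everything exact):

```python
from fractions import Fraction

def mu(lo, hi):
    """mu of an interval: the integral of x^3 from lo to hi, i.e. (hi^4 - lo^4)/4."""
    return (Fraction(hi) ** 4 - Fraction(lo) ** 4) / 4

half = Fraction(1, 2)
# Partition 1: [-1, 1] = [-1, 0) U [0, 1]
s1 = abs(mu(-1, 0)) + abs(mu(0, 1))
# Partition 2: [-1, 1] = [-1, 1/2) U [1/2, 1]
s2 = abs(mu(-1, half)) + abs(mu(half, 1))
print(s1, s2)   # 1/2 vs 15/32: the two partitions give different sums
```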

I believe it is for this reason that the definition takes a supremum over all partitions: different partitions of the same set can generate different sums, so one must take the largest attainable value to obtain a well-defined measure. But I am not totally sure.