Measure theory convention that $\infty \cdot 0 = 0$

In the preface of Terry Tao's notes on measure theory, he states that in the extended real number setting we adopt the convention that $\infty \cdot 0 = 0 \cdot \infty = 0.$

He explains that it's a useful convention which makes it natural to define integration from below (e.g., defining the integral of a nonnegative function as the supremum of the integrals of nonnegative simple functions below it). He seems to be saying, "Let's define it this way, because it makes our lives simpler." Is more justification not needed?

This 'fact' helped me during my midterm today in showing that $\mu(E \times \mathbb{R})=0$ for any measure-zero set $E$, but I felt sleazy using it.

Any thoughts (philosophical or otherwise) on why or how we can do this and still have self-respect as mathematicians?


You need to distinguish between facts and conventions. A convention is useful if it helps to organize a number of facts. In the context of measure theory, there are several facts that have the appearance of "$0\times \infty =0$" and are proven from the definitions; the convention simply allows those facts to be stated more concisely.

A couple of examples:

  1. If $\mu(X)=0$ and $f(x)=+\infty$ for all $x\in X$, then the definition of $\int_X f\,d\mu$ leads to a value of $0$ (a sketch of this computation follows the list). On the other hand, this could be thought of as "$\mu(X)\times \infty=0\times \infty$", integrating a constant function.
  2. If $f(x)=0$ for all $x\in X$ and $\mu(X)=\infty$, then the definition of $\int_X f\,d\mu$ leads to a value of $0$. On the other hand, this could be thought of as "$\mu(X)\times 0=\infty\times 0$", integrating a constant function.
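
Here is a minimal sketch of how 1. is computed, assuming the integral of a nonnegative function is defined as the supremum of the integrals of finite-valued nonnegative simple functions below it, and the integral of a simple function is the sum of $c_k\,\mu(A_k)$ over its (finitely many) nonzero values $c_k$:

$$\int_X s\,d\mu=\sum_k c_k\,\mu(A_k)=\sum_k c_k\cdot 0=0\quad\text{for every simple }0\le s=\sum_k c_k\mathbf{1}_{A_k}\le f,$$

since each $A_k\subseteq X$ has $\mu(A_k)\le\mu(X)=0$ and each $c_k$ is finite; taking the supremum gives $\int_X f\,d\mu=0$, with no $0\times\infty$ convention involved. Example 2. is analogous: the only simple $s$ with $0\le s\le 0$ is identically $0$, and the sum over its nonzero values is empty.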

As a consequence of 1. and 2. above, adopting the convention $0\times \infty=0$ (along with the more intuitive $a\times \infty=\infty$ if $a>0$) allows the following result to be stated:

$$\int_{X} C\,d\mu=\mu(X)\times C,$$

for all constants $C\in[0,\infty]$ regardless of the size of $\mu(X)$. The convention is used only in stating and using the result in condensed form, not in proving it.

Another example, one you highlight yourself: if $E$ has measure zero, then so does $E\times \mathbb R$. This is not proved using the convention. I would think of it as $$\mu(E\times \mathbb R)=\mu\left(\bigcup_n E\times [-n,n]\right)\leq\sum_n\mu(E\times[-n,n])=\sum_n 0=0,$$ where the equation $\mu(E\times[-n,n])=0$ requires justification, but nothing with infinite measure is involved.
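
For completeness, one way to justify that equation (a sketch, assuming $E\subseteq\mathbb R$ has Lebesgue measure zero and $\mu$ is Lebesgue outer measure on $\mathbb R^2$): given $\varepsilon>0$, cover $E$ by intervals $I_j$ with $\sum_j|I_j|<\varepsilon/(2n)$; then

$$E\times[-n,n]\subseteq\bigcup_j I_j\times[-n,n]\quad\text{and}\quad\sum_j\mu\big(I_j\times[-n,n]\big)=\sum_j 2n\,|I_j|<2n\cdot\frac{\varepsilon}{2n}=\varepsilon,$$

so $\mu(E\times[-n,n])=0$ by countable subadditivity. Every quantity here is finite, so again no $0\times\infty$ appears.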


Another good motivation: do you find the following equalities reasonable? $$0=\int_X 0\,d\mu=0\cdot\mu(X).$$
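
The middle quantity is $0$ straight from the definitions (the only nonnegative simple function dominated by $0$ is $0$ itself), so if the right-hand equality is to hold for every measure space, including those with $\mu(X)=\infty$, the convention $0\cdot\infty=0$ is exactly what is needed.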