Jensen's inequality: why is a probability measure needed?
Jensen's inequality states that if $\mu$ is a probability measure on $\Omega$, $g$ is an integrable function on $\Omega$, and $\phi$ is convex on the range of $g$, then:
$$\phi\left(\int_\Omega g(x)\,\mathrm d\mu\right)\leq\int_\Omega\phi\left[g(x)\right]\,\mathrm d\mu.$$
I wonder why $\mu(\Omega)$ has to be finite and equal to one. Why can't one take $\Omega = \mathbb{R}$ with Lebesgue measure, for example?
If you take $\mu$ to be an arbitrary finite measure and normalize it to $\nu = \mu/\mu(\Omega)$, then applying the ordinary Jensen inequality to $\nu$ yields another version:
$$\phi\left(\frac{1}{\mu(\Omega)}\int_\Omega g\,\mathrm d\mu\right)\leq\frac{1}{\mu(\Omega)}\int_\Omega\phi\circ g\,\mathrm d\mu.$$
In either case, the statement is that applying a convex function to the average gives a result no larger than averaging the convex function itself.
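To see why the normalizing factor matters, take $\Omega=[0,2]$ with Lebesgue measure, $g\equiv 1$ and $\phi(y)=y^2$. The unnormalized statement would claim
$$\phi\left(\int_0^2 g\,\mathrm dx\right)=\phi(2)=4\;\leq\;\int_0^2\phi(g)\,\mathrm dx=2,$$
which is false, while the normalized version gives $\phi\left(\tfrac12\cdot 2\right)=1\leq\tfrac12\cdot 2=1$, as it should.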
The proof itself completely breaks down when $\mu(\Omega)=+\infty$, though.
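Indeed, for an infinite measure the inequality can fail outright: take $\Omega=\mathbb{R}$ with Lebesgue measure $\lambda$, $g(x)=e^{-x^2}$ and $\phi(y)=y^2$. Then
$$\phi\left(\int_{\mathbb R}g\,\mathrm d\lambda\right)=\left(\sqrt{\pi}\right)^2=\pi\approx 3.14,\qquad\int_{\mathbb R}\phi(g)\,\mathrm d\lambda=\int_{\mathbb R}e^{-2x^2}\,\mathrm dx=\sqrt{\tfrac{\pi}{2}}\approx 1.25,$$
so the inequality is reversed, and no normalization is available to repair it.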
You have to require that $\mu$ be a finite measure, i.e. $\mu(\Omega)<\infty$, because otherwise you can't exploit the convexity of $\phi$. First observe that without loss of generality we may assume $\mu(\Omega)=1$: if $\mu(\Omega)<\infty$, you can pass to the normalized measure $d\nu(x)=\frac{d\mu(x)}{\mu(\Omega)}$.

Now, since $\phi$ is convex, at any point $y_0$ it admits a supporting line: there are constants $a,b$ such that $\phi(y)\geq ay+b$ for all $y$, with equality $\phi(y_0)=ay_0+b$. Choosing $y_0=\int_\Omega g(x)\,d\mu(x)$ gives
$$\phi\left(\int_{\Omega}g(x)\,d\mu(x)\right)=a\int_{\Omega}g(x)\,d\mu(x)+b,$$
and now, using that $\mu(\Omega)=1$,
$$a\int_{\Omega}g(x)\,d\mu(x)+b=\int_{\Omega}(ag(x)+b)\,d\mu(x)\leq \int_{\Omega}\phi(g(x))\,d\mu(x).$$
So, as you can see, if the measure is not finite you cannot use this trick, because you need the integral of a constant to be finite: $\int_\Omega b\,d\mu=b\,\mu(\Omega)$, which is only equal to $b$ when $\mu(\Omega)=1$.
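To make the supporting-line step concrete, take $\phi(y)=y^2$ and $y_0=\int_\Omega g\,d\mu$; the tangent line at $y_0$ gives $\phi(y)\geq 2y_0y-y_0^2$ for all $y$, with equality at $y_0$. Integrating the pointwise bound $2y_0g(x)-y_0^2\leq\phi(g(x))$ against $\mu$ yields
$$2y_0\int_\Omega g\,d\mu-y_0^2\,\mu(\Omega)\;\leq\;\int_\Omega\phi(g)\,d\mu,$$
and the left-hand side collapses to $2y_0^2-y_0^2=y_0^2=\phi\left(\int_\Omega g\,d\mu\right)$ precisely because $\mu(\Omega)=1$. With $\mu(\Omega)=\infty$ and $y_0\neq 0$, the term $y_0^2\,\mu(\Omega)$ is infinite and the bound becomes vacuous.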