What rule guarantees that a variance is always positive?

Page 74 of Bishop's *Pattern Recognition and Machine Learning* gives the equation

$$\operatorname{var}_\theta[\theta] = \mathbb{E}_\mathcal{D}\!\left[\operatorname{var}_\theta[\theta \mid \mathcal{D}]\right] + \operatorname{var}_\mathcal{D}\!\left[\mathbb{E}_\theta[\theta \mid \mathcal{D}]\right] \tag{2.24}$$

and claims that the prior variance of $\theta$ (the left-hand side, $\operatorname{var}_\theta[\theta]$) is always a positive quantity.

What rule guarantees this property?


A good way to see this is through Jensen's Inequality:

If $g(x)$ is convex, then $g(E[X])\leq E[g(X)]$.

Since $x^2$ is convex, $\,E[X^2] \geq E[X]^2$, and we know that $$\text{Var}[X] = E[X^2] - E[X]^2.$$

Thus, $\text{Var}[X] \geq 0$.
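
For a concrete sanity check (my own example, not from the book): take $X \sim \text{Bernoulli}(p)$, so that $X^2 = X$ and

$$E[X^2] = p \;\geq\; p^2 = E[X]^2, \qquad \text{Var}[X] = p - p^2 = p(1-p) \geq 0,$$

with equality exactly when $p \in \{0, 1\}$, i.e. when $X$ is constant.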


Note: As pointed out in the comments, $\text{Var}[X]$ can equal $0$, and this happens iff $X$ is (almost surely) a constant $c$: if $\text{Var}(X) = E[(X - E[X])^2] = 0$, then $X = E[X]$ with probability $1$. See the question "If $\operatorname{Var}(X)=0$, then is $X$ a constant?" for a full proof.
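
If it helps to see $(2.24)$ numerically as well, here is a minimal simulation sketch (not from the book; it assumes a Beta$(2,2)$ prior on $\theta$ with Binomial data, so the posterior stays Beta by conjugacy). Both right-hand terms come out non-negative and add up to the prior variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup for illustration: Beta(a, b) prior on theta, k ~ Binomial(n, theta) data.
a, b, n = 2.0, 2.0, 10
S = 200_000  # number of simulated (theta, dataset) pairs

theta = rng.beta(a, b, size=S)      # theta drawn from the prior
k = rng.binomial(n, theta)          # one dataset D per draw: k successes in n trials

# By conjugacy, theta | D ~ Beta(a + k, b + n - k).
pa, pb = a + k, b + n - k
post_mean = pa / (pa + pb)                               # E_theta[theta | D]
post_var = pa * pb / ((pa + pb) ** 2 * (pa + pb + 1))    # var_theta[theta | D]

prior_var = a * b / ((a + b) ** 2 * (a + b + 1))         # var_theta[theta], the LHS of (2.24)

print("prior variance       :", prior_var)
print("E_D[var] + var_D[E]  :", post_var.mean() + post_mean.var())
print("each RHS term (>= 0) :", post_var.mean(), post_mean.var())
```

Each term on the right is either a variance or an average of variances, so each is non-negative on its own, which is another way to read why the left-hand side cannot be negative.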