Is the mean of the truncated normal distribution monotone in $\mu$?
I am wondering whether the mean of the truncated normal distribution is always increasing in $\mu$. The untruncated distribution of $x$ is $\mathcal{N}(\mu,\sigma^2)$. The mean of the truncated distribution is given by $$E[x|a\le x\le b]=\mu+\frac{\phi(\alpha)-\phi(\beta)}{\Phi(\beta)-\Phi(\alpha)}\sigma,$$ where $\alpha=(a-\mu)/\sigma$ and $\beta=(b-\mu)/\sigma$. Taking the derivative and simplifying somewhat, we get $$\frac{\partial E[x|a\le x\le b]}{\partial \mu}=1+\frac{[-\phi'(\alpha)+\phi'(\beta)][\Phi(\beta)-\Phi(\alpha)]-[\phi(\alpha)-\phi(\beta)]^2}{[\Phi(\beta)-\Phi(\alpha)]^2}.$$ Thus, for example, if $\phi(\beta)\gg\phi(\alpha)$ the last term in the numerator is negative, but then $[-\phi'(\alpha)+\phi'(\beta)]$ should be positive, since $\alpha$ must be in the left tail while $\beta$ is close to the mean.
I am not sure whether there are $a,b,\mu,\sigma$ such that this derivative becomes negative. In some numerical experiments I did not find such a case. Can you find an example, or can you instead prove that the derivative is always nonnegative?
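For concreteness, here is one way such a numerical search could be run (a minimal sketch in Python, assuming scipy is available; the helper `truncated_mean` and the sampling ranges are illustrative choices, not part of the original question):

```python
import numpy as np
from scipy.stats import norm

def truncated_mean(mu, sigma, a, b):
    """Mean of N(mu, sigma^2) truncated to [a, b]."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = norm.cdf(beta) - norm.cdf(alpha)
    return mu + (norm.pdf(alpha) - norm.pdf(beta)) / Z * sigma

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(100_000):
    mu, sigma = rng.normal(0.0, 5.0), rng.uniform(0.1, 5.0)
    a = rng.normal(0.0, 5.0)
    b = a + rng.uniform(0.01, 10.0)
    # Skip numerically degenerate truncations where Z underflows.
    Z = norm.cdf((b - mu) / sigma) - norm.cdf((a - mu) / sigma)
    if Z < 1e-12:
        continue
    eps = 1e-6  # central finite difference in mu
    d = (truncated_mean(mu + eps, sigma, a, b)
         - truncated_mean(mu - eps, sigma, a, b)) / (2 * eps)
    worst = min(worst, d)
print(worst)  # stays non-negative up to finite-difference noise in these draws
```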
Solution 1:
To "address all the concerns" we prove two results: first, that the derivative of the truncated mean w.r.t. $\mu$ is bounded above by unity; second, that this derivative equals the ratio of the truncated variance to the untruncated variance (and hence is always positive). $$\text{Result A:}\quad \frac{\partial E[x|a\le x\le b]}{\partial \mu} \le 1$$
We will use results on log-concave functions (i.e. functions whose logarithm is a concave function); see the link and the references therein. Obviously $\beta\gt\alpha$. Define the non-negative bivariate function $F(\beta,\alpha)\equiv \Phi(\beta)-\Phi(\alpha)$. It can be equivalently written as $$F(\beta,\alpha)\,=\,\int_{\alpha}^{\beta} \phi(\tau)\,d\tau=\,\int_{-\infty}^{\infty} \phi(\tau)\,I(\alpha\le\tau\le \beta)\,d\tau$$ where $I(\cdot)$ is the indicator function. The indicator function is log-concave, the standard normal pdf $\phi$ is log-concave, and the product of two log-concave functions is log-concave. Moreover, integrating a log-concave function over one of its arguments yields a function that is log-concave w.r.t. the remaining variables. So $F(\beta,\alpha)$ is log-concave in $(\beta,\alpha)$ (this reproduces a proof found in Pratt, J. W. (1981), "Concavity of the Log Likelihood", Journal of the American Statistical Association, 76(373), 103-106). Consider now the univariate function $$\ln H(\mu)=\ln F(\beta(\mu),\alpha(\mu)).$$ Note that both $\beta$ and $\alpha$ are affine functions of $\mu$, so $\ln H(\mu)$ is a concave function of $\mu$ (see for example p. 86, eq. (3.15) of Boyd & Vandenberghe (2004), Convex Optimization, noting that the second derivatives of $\beta$ and $\alpha$ w.r.t. $\mu$ are zero). Now
$$\frac{\partial \ln H(\mu)}{\partial \mu}\,=\,\frac{\phi(\beta)-\phi(\alpha)}{\Phi(\beta)-\Phi(\alpha)} \left(\frac{-1} {\sigma}\right)= \frac{\phi(\alpha)-\phi(\beta)}{\Phi(\beta)-\Phi(\alpha)} \left(\frac{1} {\sigma}\right)$$ Multiply throughout by $\sigma^2:$ $$\sigma^2\frac{\partial \ln H(\mu)}{\partial \mu}\,= \frac{\phi(\alpha)-\phi(\beta)}{\Phi(\beta)-\Phi(\alpha)} \sigma$$
Then we can write
$$E[x|a\le x\le b]=\mu+\sigma^2\frac{\partial \ln H(\mu)}{\partial \mu}\tag{1}$$
and therefore
$$\frac{\partial E[x|a\le x\le b]}{\partial \mu}=1+\sigma^2\frac{\partial^2 \ln H(\mu)}{\partial \mu^2}\tag{2}$$
Since $\ln H(\mu)$ is concave, its second derivative is non-positive. So we have established: $$\frac{\partial E[x|a\le x\le b]}{\partial \mu}\le 1 \qquad \forall \;(\mu,\sigma,a,b, a \lt b) \tag{3}$$
Intuitively, the mean of the truncated distribution never moves by as much as the underlying location parameter does (and one should not expect equality to hold).
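A quick numerical sanity check of $(3)$ — a sketch assuming scipy and reusing the `truncated_mean` helper from the sketch under the question:

```python
import numpy as np

# Finite-difference check that the slope never exceeds 1 (eq. (3));
# the truncation interval [-1, 2] and sigma = 1 are arbitrary examples.
eps = 1e-6
for mu in np.linspace(-5.0, 5.0, 41):
    d = (truncated_mean(mu + eps, 1.0, -1.0, 2.0)
         - truncated_mean(mu - eps, 1.0, -1.0, 2.0)) / (2 * eps)
    assert d <= 1.0 + 1e-8
```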
$$\text{Result B:}\quad \frac{\partial E[x|a\le x\le b]}{\partial \mu}=\frac{\operatorname{Var}_{tr}(x)}{\sigma^2}\gt 0$$
For compactness we will use the following shorthands: ${\Phi(\beta)-\Phi(\alpha)}\equiv Z$, which is a function of $\mu$, $E[x|a\le x\le b]\equiv E_{tr}(x)$, $\frac{\partial \ln H(\mu)}{\partial \mu}\equiv h'$. Expressing the truncated mean in integral form we have $$E_{tr}(x)=\int_a^bx\frac{1}{Z\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)dx $$
$$\Rightarrow \frac{\partial E_{tr}(x)}{\partial \mu} = \int_a^bx\frac{1}{Z\sigma}\phi'\left(\frac{x-\mu}{\sigma}\right)\left(\frac{-1}{\sigma}\right)dx\;+\;\left[\phi(\beta)-\phi(\alpha)\right]\frac{1}{Z\sigma}\int_a^bx\frac{1}{Z\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)dx$$
Now $$\phi'\left(\frac{x-\mu}{\sigma}\right)=(-1)\left(\frac{x-\mu}{\sigma}\right)\phi\left(\frac{x-\mu}{\sigma}\right)$$ Also, the last integral equals $E_{tr}(x)$, while $\left[\phi(\beta)-\phi(\alpha)\right]\frac{1}{Z\sigma}=-h'$. So we have
$$\frac{\partial E_{tr}(x)}{\partial \mu} = \left(\frac{1}{\sigma^2}\right)\int_a^bx\frac{1}{Z\sigma}\left(x-\mu\right)\phi\left(\frac{x-\mu}{\sigma}\right)dx\;-\;h'E_{tr}(x) \tag{4}$$
Denote the remaining integral $I$ and break it in two: $$\left(\frac{1}{\sigma^2}\right)I=\left(\frac{1}{\sigma^2}\right)\int_a^bx^2\frac{1}{Z\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)dx\;-\;\left(\frac{\mu}{\sigma^2}\right)\int_a^bx\frac{1}{Z\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)dx$$ The first integral is the second raw moment of the truncated distribution, while the second is $E_{tr}(x)$. So $$\left(\frac{1}{\sigma^2}\right)I=\left(\frac{1}{\sigma^2}\right)E_{tr}(x^2)\;-\;\left(\frac{\mu}{\sigma^2}\right)E_{tr}(x)$$ Inserting into eq.(4) we have
$$\frac{\partial E_{tr}(x)}{\partial \mu} = \left(\frac{1}{\sigma^2}\right)E_{tr}(x^2)\;-\;\left(\frac{\mu}{\sigma^2}\right)E_{tr}(x)\;-\;h'E_{tr}(x)$$
$$\Rightarrow\frac{\partial E_{tr}(x)}{\partial \mu} = \left(\frac{1}{\sigma^2}\right)\left[E_{tr}(x^2)\;-\;\mu E_{tr}(x)\;-\;h'\sigma^2E_{tr}(x)\right]$$
$$\Rightarrow\frac{\partial E_{tr}(x)}{\partial \mu} = \left(\frac{1}{\sigma^2}\right)\left[E_{tr}(x^2)\;-\;\left(\mu +h'\sigma^2\right)E_{tr}(x)\right]$$
From eq.$(1)$ we have $\mu +h'\sigma^2=E_{tr}(x)$. Substituting we obtain $$\Rightarrow\frac{\partial E_{tr}(x)}{\partial \mu} = \left(\frac{1}{\sigma^2}\right)\left[E_{tr}(x^2)\;-\;\left(E_{tr}(x)\right)^2\right]=\frac{\operatorname{Var}_{tr}(x)}{\sigma^2}\gt 0$$
which is what we wanted to prove.
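As a numerical sanity check of Result B, one can compare a finite-difference derivative with the standard closed form for the truncated variance (a sketch, assuming scipy and the `truncated_mean` helper from above; the parameter values are arbitrary):

```python
from scipy.stats import norm

def truncated_var(mu, sigma, a, b):
    """Variance of N(mu, sigma^2) truncated to [a, b] (standard formula)."""
    al, be = (a - mu) / sigma, (b - mu) / sigma
    Z = norm.cdf(be) - norm.cdf(al)
    g = (norm.pdf(al) - norm.pdf(be)) / Z
    return sigma**2 * (1 + (al * norm.pdf(al) - be * norm.pdf(be)) / Z - g**2)

mu, sigma, a, b = 0.3, 1.7, -1.0, 2.5
eps = 1e-6
d = (truncated_mean(mu + eps, sigma, a, b)
     - truncated_mean(mu - eps, sigma, a, b)) / (2 * eps)
print(d, truncated_var(mu, sigma, a, b) / sigma**2)  # should agree closely
```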
Solution 2:
This is not a full answer. I thought I'd try to see how far one can get for an arbitrary probability density function $f$.
Truncated to $[a,b]$, the mean is $$m=\frac{\int_a^bxf(x)\,\mathrm dx}{\int_a^bf(x)\,\mathrm dx}=:\frac pq,$$ with $$\begin{align} p&=\int_a^bxf(x)\,\mathrm dx,\\ q&=\int_a^bf(x)\,\mathrm dx. \end{align}$$ We want to translate $f$ to the right by some $\mu$. Shifting the origin to $\mu$, this is equivalent to translating the bounds $[a,b]$ to the left, and we just have to add $\mu$ to the new mean to account for the change in origin. So $$m(\mu) = \mu + \frac{\int_{a-\mu}^{b-\mu}xf(x)\,\mathrm dx}{\int_{a-\mu}^{b-\mu}f(x)\,\mathrm dx}=:\mu+\frac{p(\mu)}{q(\mu)},$$ and $$m'=1+\frac{p'q-pq'}{q^2}=\frac1q\Big(q+p'-\frac pqq'\Big),$$ where the prime indicates differentiation by $\mu$. Of course $$\begin{align} p'&=af(a)-bf(b),\\ q'&=f(a)-f(b), \end{align}$$ and $p/q=m$ at $\mu=0$, in which case $$\begin{align} m'|_{\mu=0}&=\frac1q\Big(q+af(a)-bf(b)-m\big(f(a)-f(b)\big)\Big)\\ &=\frac1q\Big(q-(m-a)f(a)-(b-m)f(b)\Big). \end{align}$$ (We can assume $\mu=0$ with no loss of generality; just consider a shifted $f$ instead.)
This has a nice geometric interpretation. In the figure below, $q$ is the area below the blue curve while $(m-a)f(a)+(b-m)f(b)$ is the area below the red one. So $m'$ is positive if and only if the blue area is greater than the red. I don't know how to prove it though. It's certainly not true for an arbitrary probability distribution, even assuming it is unimodal.
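The closed form for $m'|_{\mu=0}$ above is easy to test by quadrature for any particular density (a sketch assuming scipy; the standard logistic density and the interval are arbitrary examples):

```python
from scipy.integrate import quad
from scipy.stats import logistic

f = logistic.pdf          # an arbitrary smooth density for illustration
a, b = -1.0, 2.0

def m(mu):
    """Truncated mean after shifting f to the right by mu."""
    q, _ = quad(f, a - mu, b - mu)
    p, _ = quad(lambda x: x * f(x), a - mu, b - mu)
    return mu + p / q

q0, _ = quad(f, a, b)
m0 = m(0.0)
formula = (q0 - (m0 - a) * f(a) - (b - m0) * f(b)) / q0

eps = 1e-6
fd = (m(eps) - m(-eps)) / (2 * eps)  # finite-difference m'(0)
print(formula, fd)  # the two should agree to finite-difference accuracy
```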
Solution 3:
If we look at $$\frac{\partial E[x|a\le x\le b]}{\partial \mu}=1+\frac{[-\phi'(\alpha)+\phi'(\beta)][\Phi(\beta)-\Phi(\alpha)]-[\phi(\alpha)-\phi(\beta)]^2}{[\Phi(\beta)-\Phi(\alpha)]^2}$$
then, since the squared term is non-negative, $$\frac{[-\phi'(\alpha)+\phi'(\beta)][\Phi(\beta)-\Phi(\alpha)]-[\phi(\alpha)-\phi(\beta)]^2}{[\Phi(\beta)-\Phi(\alpha)]^2} \leq \frac{[-\phi'(\alpha)+\phi'(\beta)][\Phi(\beta)-\Phi(\alpha)]}{[\Phi(\beta)-\Phi(\alpha)]^2}.$$
It seems that it would suffice to show (or to refute) that $$ \left|\frac{[-\phi'(\alpha)+\phi'(\beta)][\Phi(\beta)-\Phi(\alpha)]}{[\Phi(\beta)-\Phi(\alpha)]^2} \right| < 1.$$
Now $(\Phi(\beta)-\Phi(\alpha))^{2}$ could potentially be a very small number. After cancelling one factor of $\Phi(\beta)-\Phi(\alpha)$, this amounts to asking whether $$ \left|\frac{\phi'(\beta)-\phi'(\alpha)}{\Phi(\beta)-\Phi(\alpha)} \right| < \infty. $$
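Since $\phi'(u)=-u\,\phi(u)$, this ratio is easy to tabulate numerically (a sketch assuming scipy; the grid of intervals is an arbitrary choice). By l'Hôpital, as $\beta\to\alpha$ the ratio tends to $\phi''(\alpha)/\phi(\alpha)=\alpha^2-1$, so it is finite for any fixed interval but is not bounded by $1$ uniformly:

```python
from scipy.stats import norm

def phi_prime(u):
    # phi'(u) = -u * phi(u) for the standard normal density
    return -u * norm.pdf(u)

for alpha in (-3.0, -1.0, 0.0, 1.0):
    for width in (0.01, 0.1, 1.0, 5.0):
        beta = alpha + width
        ratio = ((phi_prime(beta) - phi_prime(alpha))
                 / (norm.cdf(beta) - norm.cdf(alpha)))
        print(f"alpha={alpha:5.2f}  beta={beta:5.2f}  ratio={ratio: .4f}")
# For narrow intervals the ratio approaches alpha^2 - 1, so it is finite
# but can exceed 1 in absolute value (e.g. alpha = -3 gives about 8).
```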
Added. According to this, there is no global maximum. So it seems that it is not monotone in $\mu$.