Prove this inequality $f(x_{1})+f(x_{2})+\cdots+f(x_{n})\ge\frac{n^{q+1}}{n^{q-p}-1}$

Let $x_{i}>0$ for $i=1,2,\ldots,n$, and suppose $x_{1}+x_{2}+\cdots+x_{n}=1$. Define the function $$f(x)=\dfrac{1}{x^p-x^{q}},\qquad p>0,\ q\ge 1,\ -1<p-q<0,\ p,q\in \mathbb{R}.$$

Show that $$f(x_{1})+f(x_{2})+\cdots+f(x_{n})\ge\dfrac{n^{q+1}}{n^{q-p}-1}.$$ My approach is the following:

I computed $$f'(x)=-\dfrac{px^{p-1}-qx^{q-1}}{(x^p-x^q)^2},$$ but unfortunately I cannot determine the sign of $f''(x)$, which I would need in order to apply Jensen's inequality.
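A quick numerical probe (not a proof) can at least suggest what is going on; the pair $p=0.5$, $q=1$ below is an illustrative choice of admissible parameters, not fixed by the problem. In fact, for $q=1$ and $0<p<1$, writing $h(x)=x^p-x^q$ we get $h''(x)=p(p-1)x^{p-2}<0$, and since $f=1/h$ with $h>0$ on $(0,1)$, $f''=2(h')^2/h^3-h''/h^2>0$, so $f$ is convex there and Jensen does apply for that subfamily. The code below just checks this numerically on a grid:

```python
import numpy as np

# Illustrative parameters satisfying p > 0, q >= 1, -1 < p - q < 0.
p, q = 0.5, 1.0

def f(x):
    return 1.0 / (x**p - x**q)

# Central-difference approximation of f'' on an interior grid of (0, 1).
h = 1e-5
xs = np.linspace(0.05, 0.95, 19)
f2 = (f(xs + h) - 2 * f(xs) + f(xs - h)) / h**2

print(f2.min())  # positive on this grid, consistent with convexity for these p, q
```

Of course this says nothing about other admissible $(p,q)$, which is why the solutions below avoid Jensen.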


Solution 1:

Let $g_n(\vec{x}) = \sum_{i=1}^n f(x_i)$.

Define the domain $D = \{ \vec{x} \in (0,1]^n \mid \sum_{i=1}^n x_i = 1\}$.

I claim that $g_n$ attains a global minimum on $D$ at $x_i = 1/n$ (for all $i$). Indeed, consider a point $\chi_1=(x_1,\ldots, x_i, \ldots, x_j, \ldots, x_n)$ with $x_i \ne x_j$.

Then it is straightforward to show that $g_n(\chi_1) \ge g_n(x_1,\ldots, \frac{x_i+x_j}{2},\ldots, \frac{x_i+x_j}{2},\ldots, x_n)$; we show this below. So the only point $\chi_0$ in the domain whose value is not improved upon by another point is the one in $D$ that is unchanged by averaging any two of its components. This means that all the components of $\chi_0$ are equal to each other, hence equal to $1/n$. It is a matter of algebra to show that $g_n(\chi_0)= n f(1/n) = \frac{n^{q+1}}{n^{q-p}-1}$.
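Before wading into the calculus, the averaging step can be spot-checked numerically. A minimal sketch, assuming the illustrative parameters $p=0.5$, $q=1.2$ (which satisfy $p>0$, $q\ge 1$, $-1<p-q<0$; they are not fixed by the problem):

```python
import random

# Illustrative parameters satisfying p > 0, q >= 1, -1 < p - q < 0.
p, q = 0.5, 1.2

def f(x):
    return 1.0 / (x**p - x**q)

# Smoothing step: replacing x_i, x_j by their common average should not
# decrease... rather, should not increase f(x_i) + f(x_j); all other
# summands of g_n are unchanged.
random.seed(0)
for _ in range(10000):
    a = random.uniform(0.01, 0.99)
    b = random.uniform(0.01, 0.99)
    m = (a + b) / 2
    assert f(a) + f(b) >= 2 * f(m) - 1e-12
print("smoothing check passed")
```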

I claim that $f(\alpha - x) + f(x)$ is minimized at $x = \alpha/2$, from which the claim follows. It is just tedious calculus and algebra to show that $\frac{\mathrm{d}}{\mathrm{d}x}\left( f(\alpha - x) + f(x)\right)\bigg|_{x=\alpha/2}=0$ and that $\frac{\mathrm{d}^2}{\mathrm{d}x^2}(f(\alpha-x)+f(x)) \bigg|_{x=\alpha/2} >0.$

Edit #1. I severely underreported the tedium in checking the second derivative test. Replace the above claim with the following.

I claim that for $\alpha \in (0,1)$, $f(\alpha/2 -\delta) + f(\alpha/2+\delta)$ is minimized at $\delta = 0$, from which the main claim follows. Indeed, we have

$$\frac{\mathrm{d}}{\mathrm{d}\delta} (f(\alpha/2-\delta) + f(\alpha/2+\delta))=-\frac{q \left(\frac{\alpha }{2}-\delta \right)^{q-1}-p \left(\frac{\alpha }{2}-\delta \right)^{p-1}}{\left(\left(\frac{\alpha }{2}-\delta \right)^p-\left(\frac{\alpha }{2}-\delta \right)^q\right)^2}-\frac{p \left(\frac{\alpha }{2}+\delta \right)^{p-1}-q \left(\frac{\alpha }{2}+\delta \right)^{q-1}}{\left(\left(\frac{\alpha }{2}+\delta \right)^p-\left(\frac{\alpha }{2}+\delta \right)^q\right)^2}.$$ Evaluating this at $\delta = 0$ yields $0$, since the two terms cancel.

For the second derivative test, we have $$ \begin{align*} \frac{\mathrm{d}^2}{\mathrm{d}\delta^2} &(f(\alpha/2-\delta) + f(\alpha/2+\delta))= &\\ &\frac{2 \left(q \left(\frac{\alpha }{2}-\delta \right)^{q-1}-p \left(\frac{\alpha }{2}-\delta \right)^{p-1}\right)^2}{\left(\left(\frac{\alpha }{2}-\delta \right)^p-\left(\frac{\alpha }{2}-\delta \right)^q\right)^3}-\frac{(p-1) p \left(\frac{\alpha }{2}-\delta \right)^{p-2}-(q-1) q \left(\frac{\alpha }{2}-\delta \right)^{q-2}}{\left(\left(\frac{\alpha }{2}-\delta \right)^p-\left(\frac{\alpha }{2}-\delta \right)^q\right)^2}\\ &-\frac{(p-1) p \left(\frac{\alpha }{2}+\delta \right)^{p-2}-(q-1) q \left(\frac{\alpha }{2}+\delta \right)^{q-2}}{\left(\left(\frac{\alpha }{2}+\delta \right)^p-\left(\frac{\alpha }{2}+\delta \right)^q\right)^2}+\frac{2 \left(p \left(\frac{\alpha }{2}+\delta \right)^{p-1}-q \left(\frac{\alpha }{2}+\delta \right)^{q-1}\right)^2}{\left(\left(\frac{\alpha }{2}+\delta \right)^p-\left(\frac{\alpha }{2}+\delta \right)^q\right)^3} \end{align*} $$

Evaluating this at $\delta = 0$ we have $$ \begin{align*} \frac{\mathrm{d}^2}{\mathrm{d}\delta^2} &(f(\alpha/2-\delta) + f(\alpha/2+\delta))\bigg|_{\delta = 0} = \\ &\frac{2 \left(q \left(\frac{\alpha }{2}\right)^{q-1}-p \left(\frac{\alpha }{2}\right)^{p-1}\right)^2}{\left(\left(\frac{\alpha }{2}\right)^p-\left(\frac{\alpha }{2}\right)^q\right)^3}-\frac{(p-1) p \left(\frac{\alpha }{2}\right)^{p-2}-(q-1) q \left(\frac{\alpha }{2}\right)^{q-2}}{\left(\left(\frac{\alpha }{2}\right)^p-\left(\frac{\alpha }{2}\right)^q\right)^2}\\ &-\frac{(p-1) p \left(\frac{\alpha }{2}\right)^{p-2}-(q-1) q \left(\frac{\alpha }{2}\right)^{q-2}}{\left(\left(\frac{\alpha }{2 }\right)^p-\left(\frac{\alpha }{2}\right)^q\right)^2}+\frac{2 \left(p \left(\frac{\alpha }{2}\right)^{p-1}-q \left(\frac{\alpha }{2}\right)^{q-1}\right)^2}{\left(\left(\frac{\alpha }{2}\right)^p-\left(\frac{\alpha }{2}\right)^q\right)^3}\\ &= T_1+T_2+T_3+T_4 \end{align*} $$

Now, $T_1 = T_4>0$, since their numerators are perfect squares and their denominators are positive. Indeed, we have

$$\left(\frac{\alpha}{2}\right)^p - \left(\frac{\alpha}{2}\right)^q = \left(\frac{\alpha}{2} \right)^p\left(1- \left(\frac{\alpha}{2} \right)^{q-p}\right)>0. $$ Finally, since $T_2=T_3$ and their common denominator is positive, it suffices to show that the negative of their numerator is positive, i.e. that $$(1-p)p\left(\frac{\alpha}{2} \right)^{p -2} + (q-1)q\left( \frac{\alpha}{2} \right)^{q-2}>0. $$ This follows immediately when $0<p<1$, since the first term is then positive and the second is nonnegative (as $q\ge 1$). Note: we never used that $-1< p-q$, so it appears that this condition can be relaxed.
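As a sanity check on the final bound, one can sample random points of the simplex and compare $\sum_i f(x_i)$ against $n^{q+1}/(n^{q-p}-1)$. The parameters below are again an illustrative admissible choice, not fixed by the problem:

```python
import numpy as np

# Illustrative parameters with p > 0, q >= 1, -1 < p - q < 0.
p, q, n = 0.5, 1.2, 5

def f(x):
    return 1.0 / (x**p - x**q)

bound = n**(q + 1) / (n**(q - p) - 1)

# The claimed minimum value n * f(1/n) should match the bound algebraically.
assert abs(n * f(1.0 / n) - bound) < 1e-9

# Random points of the open simplex: positive coordinates summing to 1.
rng = np.random.default_rng(0)
samples = rng.dirichlet(np.ones(n), size=10000)
values = f(samples).sum(axis=1)

assert (values >= bound - 1e-9).all()
print("minimum sampled value:", values.min(), ">= bound:", bound)
```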

Solution 2:

Take $g(\textbf{x})=f(x_1)+f(x_2)+\cdots+f(x_n)$, where $\textbf{x}=(x_1,x_2,\ldots,x_n)$. A potential route to a solution is constrained optimization via the method of Lagrange multipliers. Consider the problem $$\min_{\textbf{x}}g(\textbf{x})\hspace{0.5cm}\text{subject to}\hspace{0.5cm} \textbf{x}\cdot \textbf{1}^{T}=1,$$ where $\textbf{1}^T=\underbrace{(1,1,\ldots,1)^T}_{n\text{ times}}$.

The first-order conditions give a system of $n$ simultaneous equations $$\partial g/\partial x_i=\lambda$$ for $i=1,2,\ldots,n$, where the real number $\lambda$ is the Lagrange multiplier. By the definition of $g(\textbf{x})$ we must have $$\partial g/\partial x_i\equiv f'(x_i)=-\frac{px_i^{p-1}-qx_i^{q-1}}{(x_i^p-x_i^q)^2}=\lambda$$ for all $i=1,2,\ldots,n$. By the symmetry of this system of equations, the extreme value is reached when $x_1=x_2=\cdots=x_n$, which under the constraint yields $x_i=1/n$ for all $i=1,2,\ldots,n$.

To verify whether $\textbf{x}_0=(1/n, 1/n,\ldots, 1/n)$ is a minimum or a maximum, we check the Hessian matrix $\textbf{H}(g)$ of second partial derivatives of the objective function $g(\textbf{x})$, evaluated at $\textbf{x}_0$ subject to the constraint (any textbook on calculus, analysis, or linear algebra covers Hessian matrices in constrained problems). The problem then reduces to verifying positive semi-definiteness of the Hessian on the tangent space of the constraint, i.e. $\textbf{u}\cdot\textbf{H}\cdot\textbf{u}^T\geq 0$ for all $\textbf{u}\in\mathbb{R}^n$ with $\textbf{u}\cdot\textbf{1}^T=0$. If so, then $\textbf{x}_0$ is a local minimum, so $$g(\textbf{x})\geq g(\textbf{x}_0)=n\cdot f(1/n)=n\cdot\frac{1}{n^{-p}-n^{-q}}=\frac{n^{q+1}}{n^{q-p}-1}.$$
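The constrained second-order condition can be spot-checked numerically: perturb $\textbf{x}_0$ by small zero-sum vectors $\textbf{u}$ (so the constraint $\textbf{x}\cdot\textbf{1}^T=1$ is preserved) and confirm that $g$ does not decrease. A minimal sketch with illustrative parameters:

```python
import numpy as np

# Illustrative parameters with p > 0, q >= 1, -1 < p - q < 0.
p, q, n = 0.5, 1.2, 5

def g(x):
    return (1.0 / (x**p - x**q)).sum()

x0 = np.full(n, 1.0 / n)   # the candidate minimizer (1/n, ..., 1/n)
g0 = g(x0)

rng = np.random.default_rng(1)
for _ in range(1000):
    u = rng.standard_normal(n)
    u -= u.mean()              # project onto the tangent space u . 1 = 0
    x = x0 + 1e-3 * u          # small feasible perturbation, stays in (0, 1)^n
    assert abs(x.sum() - 1.0) < 1e-12
    assert g(x) >= g0 - 1e-12  # g never drops below its value at x0
print("tangent-space perturbation check passed")
```

This only probes directions at one step size, so it is evidence rather than a substitute for checking $\textbf{u}\cdot\textbf{H}\cdot\textbf{u}^T\geq 0$ analytically.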