A very general method for proving inequalities. Too good to be true?
Actually, this is one approach to proving inequalities, and it is sometimes used. But there are some caveats you need to pay attention to:
1. It is not always the case that the multi-variable function is differentiable at every point you are interested in. (In some cases the function is differentiable a.e., but is so bizarre that its derivative does not carry much information about the function's variation; this is rare, though, and arises in theory more than in practical use cases.)
2. Even if 1 is not an issue, it is sometimes not easy to differentiate the function: the derivative can be tricky to evaluate at certain points, it can be very involved when the function is a nested combination of elementary operations/functions, or it may have no closed form when the function is not elementary, etc. (Moreover, if the derivative has a complicated form, it is also difficult to solve the equations that set it to zero.)
3. Even if 2 is not an issue, a point where the first-order partial derivatives vanish is not guaranteed to be a local minimum/maximum; you might just get a saddle point. So second-order derivatives need to be checked as well (note also that there is no guarantee second-order derivatives exist).
4. Even if 3 is not an issue, you still cannot be sure that the local maximum/minimum is the global maximum/minimum; e.g. values at the boundary could be larger than the local maximum you found. So check more before you draw the final conclusion. For example, when finding the global maximum (a worked example follows this list):
   - Compare all local maximum points (if there are any, and make sure they really are local maxima);
   - Compare the values at the boundary (if there is one);
   - Check the function's behavior as it approaches infinity (if applicable);
   - Check points where the first-order derivatives do not exist (if any); they could also be maximum points;
   - and so on.
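As a concrete illustration of points 3 and 4, consider maximizing $F(x,y)=xy$ over the closed unit disk $x^2+y^2\le1$. The only interior critical point is the origin:

$$\nabla F=(y,\;x)=(0,0)\iff(x,y)=(0,0),\qquad \operatorname{Hess}F=\begin{pmatrix}0&1\\1&0\end{pmatrix},$$

and the Hessian has eigenvalues $\pm1$, so the origin is a saddle point, not a local maximum. The global maximum lives on the boundary: writing $x=\cos t$, $y=\sin t$ gives

$$F=\cos t\sin t=\tfrac12\sin 2t\le\tfrac12,$$

with equality e.g. at $t=\pi/4$, i.e. at $(x,y)=\left(\tfrac1{\sqrt2},\tfrac1{\sqrt2}\right)$. (And for the bullet about nonexistent derivatives: $F(x)=|x|$ attains its minimum at $x=0$, precisely where $F'$ does not exist.)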
The answer by @yujie-zha misses a few important points, so let me elaborate on this.
Congratulations, you have rediscovered a well-known theorem, sometimes called in France the “principe de Fermat,” which shares its name with a reflection law in optics and with the observation that there is no strictly decreasing sequence of positive integers. Pierre de Fermat was a French mathematician who lived in the 17th century.
Your discussion covers a lot of useful cases, but it has two major flaws:
1. A point where the quantity $F = f - g$ reaches its minimum need not exist. A simple example is provided by $1/x$ for $x\gt0$: the infimum is $0$, but it is not attained at any point.
2. Even if such a point exists, the geometry of the set where the minimum of $F$ is attained can be arbitrarily complicated, so even if it has infinitely many points, it need not be so nice that you can actually remove a variable. (Functions $F$ that allow you to remove variables in this way are called submersions.) Furthermore, finding that point can be impractical.
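As a small illustration of the second flaw, consider

$$F(x,y)=x^2y^2,\qquad \nabla F=(2xy^2,\;2x^2y).$$

The minimum value $0$ is attained exactly on the union of the two coordinate axes; near the origin that set is a cross, so no single variable can be removed there, and indeed $\nabla F$ vanishes on the whole minimum set, so $F$ is not a submersion at any of its minimum points.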
Regarding 1, a common hypothesis guaranteeing that the minimum is actually attained is that the function $F$ under study is continuous and defined over a compact domain. This might not tell you much for now, but you can realise what happens if you try to build a function that is always positive on the real line but whose minimum “flees to infinity” (a colloquial way of saying that the function is asymptotically zero, if you are already familiar with these terms).
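For instance, $F(x)=\dfrac{1}{1+x^2}$ is continuous and strictly positive on all of $\mathbb{R}$, yet

$$\inf_{x\in\mathbb{R}}F(x)=0$$

is not attained at any point: the minimum “flees to infinity.” Restricted to a compact domain such as $[-2,2]$, however, the minimum is attained, here at $x=\pm2$ with value $1/5$.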
Regarding 2, a theorem usually attributed to Whitney states that any closed subset of a Euclidean space is the zero locus of a smooth function. This will probably not tell you much either, but you can try to find functions $F$ that are nonnegative, everywhere differentiable over the real line, and whose zero locus is:
- The set of integers (a sample answer is sketched after this list).
- The set of inverses of integers, together with the point $0$.
- The segment $[-1,1]$.
- The Cantor set (if you know this one).
You can also look for similar examples in higher dimensions.
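For the first item, one possible answer is $F(x)=\sin^2(\pi x)$, which is smooth, nonnegative, and vanishes exactly at the integers. For the segment, one can take

$$F(x)=\max(|x|-1,\,0)^2,$$

which is differentiable everywhere and vanishes exactly on $[-1,1]$.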
This aside, the method you suggest actually works, but it happens to be not as easy to use as you seem to think. Other common sources of inequalities are convex functions, especially Jensen's inequality, the Cauchy-Schwarz (or Cauchy-Bouniakovski) inequality, and the modulus of a holomorphic function.
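For instance, since $x\mapsto-\ln x$ is convex, Jensen's inequality immediately gives the AM-GM inequality for positive numbers $x_1,\dots,x_n$:

$$-\ln\left(\frac{x_1+\cdots+x_n}{n}\right)\le\frac1n\sum_{i=1}^n(-\ln x_i),\qquad\text{i.e.}\qquad\frac{x_1+\cdots+x_n}{n}\ge\sqrt[n]{x_1\cdots x_n},$$

with no derivative computation needed.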