Is there an analytic approximation to the minimum function?

I am looking for an analytic function that approximates the minimum function, i.e.,

$|f(x_1,x_2) - \min(x_1,x_2)| < \zeta$ for some $\zeta$ that may depend on $|x_1 - x_2|$. Alternatively, it may be a sequence $f_1, f_2, \ldots$ with corresponding bounds $\zeta_n$ such that $\lim_{n \to \infty} \zeta_n = 0$.


Solution 1:

Following Raskolnikov's hint, it is enough to produce an analytic approximation to the absolute value function, since $\min(x,y) = \frac{1}{2}(x + y - |x - y|)$. Here the functions $f_\epsilon(x):=\sqrt{x^2+\epsilon^2}$ come in handy; they differ from $|x|$ by at most $\epsilon$ on all of ${\mathbb R}$. If $|x|$ is bounded a priori, say $|x|\leq 1$, then you can even replace the $f_\epsilon$ by suitable polynomials $q_\epsilon(x^2)$.
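A minimal sketch of this construction (the function names are my own, not standard):

```python
import math

def smooth_abs(x, eps):
    """Analytic approximation to |x|: sqrt(x^2 + eps^2).
    It overestimates |x| by at most eps everywhere on R."""
    return math.sqrt(x * x + eps * eps)

def smooth_min(x, y, eps):
    """Uses the identity min(x, y) = (x + y - |x - y|) / 2,
    with |.| replaced by its smooth approximation.
    The error is at most eps / 2."""
    return (x + y - smooth_abs(x - y, eps)) / 2
```

Since $f_\epsilon \geq |x|$, this smooth minimum always sits slightly below the true minimum, and the gap shrinks linearly with $\epsilon$.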

Solution 2:

Note that you can easily switch between max and min as $\min(x, y) = - \max(-x, -y)$.

In computer graphics and in optimization, various "smooth" or "soft" maximum and minimum functions are used. In the graphics context they serve purely aesthetic ends, eliminating ugly discontinuities. In optimization they provide a nicer surrogate that the optimizers handle better than the true function.

One such common softmax is $\log(\exp(x) + \exp(y))$. Rescaling by $k$ sets a scale for the fuzziness: $\log(\exp(kx) + \exp(ky))/k$. This has the nice property that it approaches the larger of $x$ and $y$ when they are greatly different, but the bad property that it is off by $(\log 2)/k$ when they are equal.
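A sketch of the log-sum-exp version, with the usual shift by the larger argument so $\exp$ never overflows (function names are my own):

```python
import math

def soft_max(x, y, k):
    """log(exp(kx) + exp(ky)) / k, computed stably by factoring out
    the larger argument.  Approaches max(x, y) as k grows; equals
    max(x, y) + log(2)/k when x == y."""
    m = max(x, y)
    return m + math.log(math.exp(k * (x - m)) + math.exp(k * (y - m))) / k

def soft_min(x, y, k):
    """Uses min(x, y) = -max(-x, -y)."""
    return -soft_max(-x, -y, k)
```

Note that `soft_max` always overestimates the true maximum, so `soft_min` always underestimates the true minimum.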

The maximum is also the $p = \infty$ limit of the generalized mean/power means. Conversely, the $p = -\infty$ limit is the minimum.
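A sketch of the power-mean route, valid only for positive arguments (the helper name is my own):

```python
def power_mean(x, y, p):
    """Generalized (power) mean ((x^p + y^p) / 2)^(1/p) for x, y > 0.
    Tends to max(x, y) as p -> +inf and to min(x, y) as p -> -inf."""
    return ((x ** p + y ** p) / 2) ** (1 / p)
```

For moderate $p$ the convergence is only polynomial in $1/p$, so large exponents are needed for a tight approximation, which quickly runs into floating-point overflow for large inputs.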

(There is also a softmax activation function which turns numbers into weights of the various choices. It's really a soft selection of the maximum, so is perhaps misnamed. This is not what you want though it is related -- using the weights for a sum of inputs gives something reasonable.)

Solution 3:

The only function that I know has this property is essentially

$$\max(x, y) \sim \frac{x e^{kx} + y e^{ky}}{e^{kx} + e^{ky}}$$

for large $k$.

(@wnoise was close to the answer. In this case there is no problem if $x=y$.)

This works for positive or negative numbers and tends to the maximum as $k\to\infty$. Other formulas based on powers, such as $\sqrt[n]{x^n + y^n}$, are only valid for positive numbers.

The integral counterpart is

$$ \frac{\int{f(x) e^{kf(x)} dx}}{\int{e^{kf(x)} dx}} $$

There is a small caveat: if you want to implement this numerically, you need an estimate of the maximum in the first place (and integrate over shifted values); otherwise the integral (or even the sum) will under/overflow pretty easily.
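A sketch of the discrete (sum) version of this weighted average, with the shifting trick applied (the function name is my own):

```python
import math

def boltzmann_max(values, k):
    """Weighted average sum(x * exp(k*x)) / sum(exp(k*x)).
    Shifting every exponent by the sample maximum leaves the ratio
    unchanged but keeps exp() from overflowing.
    Tends to max(values) as k -> +inf; exact when all values are equal."""
    m = max(values)  # the shift cancels between numerator and denominator
    weights = [math.exp(k * (v - m)) for v in values]
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

Here the "estimate of the maximum" is simply the exact sample maximum, which is available for a finite list; for the integral version one would shift by whatever bound on $\max f$ is at hand.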

This is related to Laplace's method. Note that I asked a related question:

Real approximation to the maximum using Laplace's method integral

Solution 4:

You can smooth out any function by taking the convolution with a smooth bump function. As the support of the bump function shrinks to zero, the sequence of smoothed versions will converge to your original function.

Added: To be more specific, a Gaussian convolution with $|x|$ gives
$$|x|\approx x\ \mbox{erf}(x/\sqrt{2}\sigma)+\sqrt{2\over \pi}\sigma \exp(-x^2/2 \sigma^2)$$ as a smooth approximation to the absolute value function. If you prefer a function that vanishes at zero, take $$|x|\approx x\ \mbox{erf}(x/\sqrt{2}\sigma)+\sqrt{2\over \pi}\sigma \left[\exp(-x^2/2 \sigma^2)-1\right].$$ Here "erf" refers to the error function. Letting $\sigma$ approach zero gives better approximations.

For that matter, we could just use $$ |x|\approx x\ \mbox{erf}(x/\sigma).$$
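A sketch of the Gaussian-convolution formula above, using the standard-library error function (the function name is my own):

```python
import math

def smooth_abs_erf(x, sigma):
    """Gaussian convolution of |x| with a kernel of width sigma:
    x * erf(x / (sqrt(2) * sigma)) + sqrt(2/pi) * sigma * exp(-x^2 / (2*sigma^2)).
    Overestimates |x|; the error is largest (sqrt(2/pi) * sigma) at x = 0."""
    return (x * math.erf(x / (math.sqrt(2) * sigma))
            + math.sqrt(2 / math.pi) * sigma
              * math.exp(-x * x / (2 * sigma * sigma)))
```

As with the other constructions, plugging this into $\min(x,y) = \frac{1}{2}(x + y - |x-y|)$ gives a smooth minimum whose error shrinks with $\sigma$.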