Searching for tighter bounds

We are looking for a root of $$f(w) = \sum_{i=1}^{n}\frac{(K_i-1)z_i}{1+(K_i-1)w}.$$ For brevity, put $A_i=\frac{1}{K_i-1}$ and assume $A_1<\ldots<A_n$. Then $$f(w) = \sum_{i=1}^{n}\frac{z_i}{A_i+w}$$ changes sign at each $w_i=-A_i$, satisfies $\lim_{w\to\infty}f(w)=0$, and, the $z_i$ being positive, is strictly decreasing between consecutive poles, so it has exactly one root in each interval $(-A_{j+1},-A_j)$. Moreover, the roots of $f(w)$ are the same as the roots of the degree-$(n-1)$ polynomial $$p(w) = \sum_{i=1}^{n}z_i\prod_{k\neq i}(w+A_k), $$ so we can find our $w$ by looking for the only root of $p(w)$ in $(-A_{j+1},-A_j)$. Many computational methods can accomplish this task: bisection, Newton's method, the secant method, or a suitable combination of these.
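
In case a concrete starting point is useful, here is a minimal Python sketch of the bisection route (function names, indexing and tolerances are mine); it works on $f$ directly rather than on $p(w)$, and assumes all $z_i>0$, so that $f$ decreases from $+\infty$ to $-\infty$ across the chosen interval:

```python
import numpy as np

def root_in_interval(K, z, j, tol=1e-14, max_iter=200):
    """Unique root of f(w) = sum_i z_i/(A_i + w) in (-A_{j+1}, -A_j),
    with A_i = 1/(K_i - 1) sorted increasingly (j is 0-based here).
    A sketch via plain bisection, assuming all z_i > 0 so that f is
    decreasing from +inf to -inf on the interval."""
    K = np.asarray(K, float)
    z = np.asarray(z, float)
    A = 1.0 / (K - 1.0)
    order = np.argsort(A)
    A, z = A[order], z[order]

    def f(w):
        return np.sum(z / (A + w))

    lo, hi = -A[j + 1], -A[j]
    pad = 1e-12 * (hi - lo)          # stay clear of the poles at the endpoints
    lo, hi = lo + pad, hi - pad
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:             # f is decreasing, so the root lies to the right
            lo = mid
        else:
            hi = mid
        if hi - lo < tol * max(1.0, abs(mid)):
            break
    return 0.5 * (lo + hi)
```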


Update. Since the $A_i$ can only be positive or less than $-1$ (as $K_i>0$), let $I^{+}$ be the set of $i$ such that $A_i>0$ and $I^{-}$ the set of $i$ such that $A_i<0$. Moreover, let $a=-\frac{1}{K_{max}-1}<0$ and $b=\frac{1}{1-K_{min}}>1$. We are looking for the only solution in $(a,b)$ of: $$L(x)=-\sum_{i\in I^{-}}\frac{z_i}{x+A_i}=\sum_{i\in I^+}\frac{z_i}{x+A_i}=R(x),$$ where $L(x)$ is a convex increasing function and $R(x)$ is a convex decreasing function on $(a,b)$. We can perform a few bisection steps to ensure that $R(x)$ and $L(x)$ are intersecting, convex, monotonic and bounded functions over $[c,d]\subset(a,b)$. The problem (*) is now to find the point of intersection between the graphs of two bounded convex functions with opposite monotonicity over $[c,d]$. For this kind of problem, we can tweak the secant-tangent method to ensure fast convergence, thanks to the following simple geometric fact.

[Figure: intersecting the graphs of two bounded convex functions with opposite monotonicity]
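
For later reference, here is a minimal sketch (names are mine) that builds $L$, $R$, their derivatives and the interval $(a,b)$ from $K$ and $z$, under the assumptions of the text ($z_i>0$ and $K$-values on both sides of $1$):

```python
import numpy as np

def make_L_R(K, z):
    """Split f into L(x) = -sum_{I^-} z_i/(x+A_i) and R(x) = sum_{I^+} z_i/(x+A_i),
    and return them together with the interval (a, b).
    A sketch assuming z_i > 0 and K-values both above and below 1."""
    K = np.asarray(K, float)
    z = np.asarray(z, float)
    A = 1.0 / (K - 1.0)
    neg, pos = A < 0.0, A > 0.0          # index sets I^- and I^+

    a = -1.0 / (K.max() - 1.0)           # a < 0
    b = 1.0 / (1.0 - K.min())            # b > 1

    L = lambda x: -np.sum(z[neg] / (x + A[neg]))
    R = lambda x: np.sum(z[pos] / (x + A[pos]))
    # First derivatives, used by the tangent constructions below.
    dL = lambda x: np.sum(z[neg] / (x + A[neg]) ** 2)
    dR = lambda x: -np.sum(z[pos] / (x + A[pos]) ** 2)
    return L, R, dL, dR, a, b
```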

The point of intersection $P=(p_x,p_y)$ belongs to the interior of the triangle whose sides are two tangents and one secant of $R(x)$. The same holds for $L(x)$, so by intersecting the $L(x)$-secant with the $R(x)$-tangent at $d$, and the $R(x)$-secant with the $L(x)$-tangent at $d$, we can find a sub-interval $[c',d']\subset[c,d]$ with $p_x\in[c',d']$. Moreover, one of $c'$ and $d'$ can be replaced with the abscissa of the intersection between the $L(x)$-secant and the $R(x)$-secant, as depicted in the figure above. (Other choices are the abscissa of the intersection of the tangents at $c$, or at $d\ldots$)
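
A sketch of one such bracketing step, assuming the `L, R, dL, dR` callables from the previous sketch and that $L$ is convex increasing and $R$ convex decreasing on $[c,d]$; the clamping at the end is only a numerical safeguard:

```python
def line_intersection(x1, y1, s1, x2, y2, s2):
    """Abscissa of the intersection of two lines given as point + slope."""
    return (y2 - y1 + s1 * x1 - s2 * x2) / (s1 - s2)

def secant_tangent_step(L, R, dL, dR, c, d):
    """One bracketing step for problem (*): the L-secant vs the R-tangent at d
    gives a lower bound c', the R-secant vs the L-tangent at d gives an upper
    bound d'.  A sketch of the geometric fact described in the text."""
    Lc, Ld, Rc, Rd = L(c), L(d), R(c), R(d)
    sL = (Ld - Lc) / (d - c)             # slope of the L-secant over [c, d]
    sR = (Rd - Rc) / (d - c)             # slope of the R-secant over [c, d]

    c_new = line_intersection(c, Lc, sL, d, Rd, dR(d))   # L-secant vs R-tangent at d
    d_new = line_intersection(c, Rc, sR, d, Ld, dL(d))   # R-secant vs L-tangent at d
    # Clamp to the current bracket, just in case of round-off.
    c_new, d_new = max(c, min(c_new, d)), max(c, min(d_new, d))
    return min(c_new, d_new), max(c_new, d_new)
```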

To avoid the first bisection steps, we can multiply both $R(x)$ and $L(x)$ by $(x-a)(x-b)$ in order to remove the singularities at the endpoints. Convexity is preserved, monotonicity may not be, but the intersection point is still unique, and a good approximation for it seems to be the intersection of the tangent to $L(x)(x-a)(x-b)$ at $b$ with the tangent to $R(x)(x-a)(x-b)$ at $a$. Since:

$$L(x)(x-a)(x-b)= -\sum_{i\in I^-,\,A_i\neq -b}\frac{z_i(x-b)(x-a)}{x+A_i}-z_b(x-a),$$ $$R(x)(x-a)(x-b)= \sum_{i\in I^+,\,A_i\neq -a}\frac{z_i(x-b)(x-a)}{x+A_i}+z_a(x-b)$$ (where $z_a$ and $z_b$ denote the $z_i$ of the components with $A_i=-a$ and $A_i=-b$, respectively), the approximation we get this way is the solution of $$z_b+(x-b)\sum_{i\in I^-,\,A_i\neq -b}\frac{z_i}{b+A_i}=z_a+(x-a)\sum_{i\in I^+,\,A_i\neq -a}\frac{z_i}{a+A_i},$$ i.e. $$\tilde{w}=\frac{(z_a-z_b)+\sum_{i\in I^-,\,A_i\neq -b}\frac{b\,z_i}{b+A_i}-\sum_{i\in I^+,\,A_i\neq -a}\frac{a\,z_i}{a+A_i}}{\sum_{i\in I^-,\,A_i\neq -b}\frac{z_i}{b+A_i}-\sum_{i\in I^+,\,A_i\neq -a}\frac{z_i}{a+A_i}}.$$ If $R(\tilde{w})-L(\tilde{w})$ is positive, we replace $[a,b]$ with $[\tilde{w},b]$; otherwise we replace $[a,b]$ with $[a,\tilde{w}]$, and we iterate the argument. After two steps both endpoints of the original interval have been replaced, so we no longer need to "normalize" $L(x)$ and $R(x)$: we are back to problem (*), and the formulas, if we want, become good-looking and simple again.
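
A sketch of this first guess (names are mine; it assumes the extreme $K$-values are each attained by a single component, so that $z_a$ and $z_b$ are well defined):

```python
import numpy as np

def w_tilde(K, z, a, b):
    """First guess from the regularized tangent construction in the text.
    z_a and z_b are the mole fractions of the components with A_i = -a
    (K = K_max) and A_i = -b (K = K_min); a sketch assuming those extreme
    K-values are attained by a single component each."""
    K = np.asarray(K, float)
    z = np.asarray(z, float)
    A = 1.0 / (K - 1.0)
    neg, pos = A < 0.0, A > 0.0

    i_a, i_b = np.argmax(K), np.argmin(K)         # A[i_a] = -a, A[i_b] = -b
    z_a, z_b = z[i_a], z[i_b]

    mask_neg = neg.copy(); mask_neg[i_b] = False  # I^-, A_i != -b
    mask_pos = pos.copy(); mask_pos[i_a] = False  # I^+, A_i != -a

    S_L = np.sum(z[mask_neg] / (b + A[mask_neg]))
    S_R = np.sum(z[mask_pos] / (a + A[mask_pos]))
    return ((z_a - z_b) + b * S_L - a * S_R) / (S_L - S_R)
```

After checking the sign of $R(\tilde{w})-L(\tilde{w})$ and shrinking the bracket accordingly, and once both endpoints are interior, one can fall back to the secant-tangent step of (*) sketched above.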

Another idea is to approximate $L(x)$ and $R(x)$ (or their regularized versions) with homographic functions (Padé approximants), via a 3-point fit at $\left(c,d,\frac{c+d}{2}\right)$ (with respect to four parameters), then solve $\tilde{L}(x)=\tilde{R}(x)$. This may or may not have stability issues; I need to investigate further.
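
A possible sketch of this fit, in which I normalize the denominator so that three parameters per side remain (an assumption of mine; the text allows four); the equation $\tilde{L}(x)=\tilde{R}(x)$ then reduces to a quadratic:

```python
import numpy as np

def fit_homographic(xs, ys):
    """Fit g(x) = (p*x + q)/(x + r) through three points by solving the
    linear system p*x_k + q - y_k*r = x_k*y_k for (p, q, r)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    M = np.column_stack([xs, np.ones(3), -ys])
    p, q, r = np.linalg.solve(M, xs * ys)
    return p, q, r

def intersect_homographic(L, R, c, d):
    """Approximate the solution of L(x) = R(x) on [c, d] by replacing both
    sides with 3-point homographic fits and solving the resulting quadratic."""
    xs = np.array([c, 0.5 * (c + d), d])
    p1, q1, r1 = fit_homographic(xs, [L(x) for x in xs])
    p2, q2, r2 = fit_homographic(xs, [R(x) for x in xs])
    # (p1*x + q1)(x + r2) = (p2*x + q2)(x + r1)
    coeffs = [p1 - p2,
              p1 * r2 + q1 - p2 * r1 - q2,
              q1 * r2 - q2 * r1]
    roots = np.roots(coeffs)
    roots = roots[np.isreal(roots)].real
    in_box = roots[(roots >= c) & (roots <= d)]
    return in_box[0] if in_box.size else 0.5 * (c + d)
```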

Another important fact that I believe is exploitable is that $L(x)$ and $R(x)$ are much more than convex on $(a,b)$: $L(x)$ is an analytic function with non-negative derivatives of every order, while $R(x)$ is an analytic function with non-negative derivatives of even order and non-positive derivatives of odd order on $(a,b)$. The same holds for $1/R(x)$ and $1/L(x)$, which are also non-negative and bounded over $[a,b]$. Intersecting the tangents at the endpoints (namely the tangent to $1/L(x)$ at $b$ and the tangent to $1/R(x)$ at $a$) gives the approximation: $$\tilde{w} = \frac{R(a)L(b)(R(a)-L(b))+(bL'(b)R^2(a)-aR'(a)L^2(b))}{R^2(a)L'(b)-L^2(b)R'(a)}.$$
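
A direct transcription of this formula (it assumes the current bracket endpoints $a$, $b$ are interior points of the original interval, so that $R(a)$, $L(b)$ and the derivatives are finite; the callables are those of the earlier sketch):

```python
def w_tilde_tangents(L, R, dL, dR, a, b):
    """Endpoint-tangent approximation: it only needs R(a), L(b), R'(a), L'(b).
    A sketch assuming a, b are strictly inside the original interval."""
    Ra, Lb, dRa, dLb = R(a), L(b), dR(a), dL(b)
    num = Ra * Lb * (Ra - Lb) + (b * dLb * Ra**2 - a * dRa * Lb**2)
    den = Ra**2 * dLb - Lb**2 * dRa
    return num / den
```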


This is an old idea, borrowed from simultaneous root-approximation algorithms, but it may be really effective. Compose $f$ with an affine map that sends $[a,b]$ to $[-1,1]$. Consider the map $h$ that sends $z$ to $\frac{1}{2}\left(z+\frac{1}{z}\right)$. For any $r>0$, the iterates $h^{(n)}(r)$ converge pretty fast to $1$, and for any $r<0$ they converge pretty fast to $-1$. Moreover, $h$ is easy to invert. So, by composing with $h$ or with its inverse, we have the choice to "condense" the singularities of $f,L,R$ near the endpoints of $[-1,1]$ or to push them away. In both cases, it is very likely possible to provide extremely tight bounds for $L(x)$ and $R(x)$, hence for $\tilde{w}$.
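
A tiny sketch of $h$, its inverse and the speed of its iterates (names and branch convention are mine):

```python
import math

def h(z):
    """The map z -> (z + 1/z)/2; its iterates send any r > 0 to 1 and any
    r < 0 to -1 (it is Newton's iteration for z^2 = 1)."""
    return 0.5 * (z + 1.0 / z)

def h_inv(w, outer=True):
    """Real inverse of h for |w| >= 1: the preimages of w are w +/- sqrt(w^2 - 1);
    outer=True returns the one with |z| >= 1 (pushes singularities away)."""
    s = math.copysign(math.sqrt(w * w - 1.0), w)
    return w + s if outer else w - s

# Quadratic convergence of the iterates toward the fixed point 1:
r = 3.0
for _ in range(5):
    r = h(r)
    print(r)   # ~1.667, ~1.133, ~1.0078, ~1.00003, ~1 + 5e-10
```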