Efficient algorithm for solving the equation $\sum_n a_n/(x - x_n) = 1$

Here the $x_n$ are real, the $a_n$ are positive, and the summation is finite.

The picture is very clear: since the $a_n$ are positive, the left-hand side is strictly decreasing on each interval between consecutive poles, falling from $+\infty$ to $-\infty$, so each root is neatly bracketed.

But what numerical algorithm is stable and efficient? Suppose the $x_n$ are sorted in increasing order; then between $x_n$ and $x_{n+1}$ there must be exactly one root, and there is one more root to the right of the largest pole, where the sum decays from $+\infty$ to $0$. One can use the bisection method, but it is slow. One can instead try Newton's method, but it is not necessarily stable.
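For reference, here is a minimal baseline in Python (a sketch with my own naming and tolerance choices, using SciPy's `brentq`): bracket each root between consecutive poles, stepping slightly inside the brackets to avoid the infinities at the endpoints, plus the rightmost root beyond the largest pole.

```python
import numpy as np
from scipy.optimize import brentq

def all_roots(a, x_poles, eps=1e-10):
    """Roots of sum_n a_n/(x - x_n) = 1 for positive a_n and sorted,
    distinct poles x_n: one root per interval (x_n, x_{n+1}), plus one
    to the right of the largest pole."""
    a = np.asarray(a, float)
    x_poles = np.asarray(x_poles, float)
    f = lambda x: np.sum(a / (x - x_poles)) - 1.0
    roots = []
    for lo, hi in zip(x_poles[:-1], x_poles[1:]):
        d = eps * (hi - lo)              # step just inside the poles
        roots.append(brentq(f, lo + d, hi - d))
    # Rightmost root: f decays from +inf to 0 on (x_max, inf), and
    # f(x) <= sum(a)/(x - x_max), which equals 1/2 at x = x_max + 2*sum(a),
    # so the bracket below is guaranteed to contain a sign change.
    S = a.sum()
    roots.append(brentq(f, x_poles[-1] + eps * S, x_poles[-1] + 2.0 * S))
    return roots
```

This is just the slow-but-safe baseline the question mentions; the answers below are about doing better.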


Solution 1:

I do not know the context of your problem, but over the years we have worked a great deal on the solutions of the so-called Underwood equation, which appears in shortcut distillation problems. It is usually written $$\sum_{i=1}^n \frac{\alpha_i\, z_i}{\alpha_i- \theta}=1-q$$ where the $\alpha_i> 0$ and $z_i >0$, $n$ can be very large (potentially up to thousands), and $q$ is given.

In chemical process design, this equation has to be solved zillions of times (inside optimization problems with hundreds of variables). Because of that, we needed very fast, stable, and robust solution methods.

Our most recent work was published in $2014$ in this paper (you can also find it here), where we proposed rapid and robust solution methods based on convex transformations. Besides, and this is a key point, for each root we proposed simple and efficient starting guesses which typically mean that very few iterations are required (this is illustrated in the paper's first figure, where the starting guess is almost the solution).

I consider the paper sufficiently clear and detailed (with several examples) to help you. If you have any problem, feel free to contact me (my e-mail address is in my profile).

Edit

If you want something simpler, consider for example (this is easy to generalize) $$f(x)=\sum_{i=1}^6 \frac{a_i}{x- b_i}-1.$$ For the root between $b_1$ and $b_{2}$, work instead with $$g_{(1,2)}(x)=(x-b_1)(x-b_2)\, f(x),$$ which is $$g_{(1,2)}(x)=a_1 (x-b_2)+a_2 (x-b_1)-(x-b_1) (x-b_2)+$$ $$(x-b_1) (x-b_2) \left(\frac{a_3}{x-b_3}+\frac{a_4}{x-b_4}+\frac{a_5}{x-b_5}+\frac{a_6}{x-b_6}\right),$$ so that $$g_{(1,2)}(b_1)=a_1 (b_1-b_2)\qquad \text{and} \qquad g_{(1,2)}(b_2)=-a_2 (b_1-b_2)$$ have opposite signs.

Now use, for example, subroutine rtsafe from "Numerical Recipes", which combines bisection and Newton-Raphson steps (the safeguard is required since, between the bounds $b_1$ and $b_2$, the function $g_{(1,2)}(x)$ goes through an extremum); the code is here. This works quite well without any convergence problem (though it is much less efficient than what was proposed in our paper). As you can see, the simple idea is just to remove the asymptotes; this is the so-called Leibovici & Neoschil method, which has been widely used for this class of problems during the last $26$ years.
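If it helps, here is a short Python sketch of the same idea (my own transcription, not the authors' code and not rtsafe itself): it evaluates $g_{(1,2)}$ in its pole-free expanded form and drives a bisection-safeguarded Newton iteration on it. Indices are 0-based, and the poles $b_i$ are assumed sorted with the target root between $b_k$ and $b_{k+1}$.

```python
import numpy as np

def solve_between(a, b, k, tol=1e-12, maxit=100):
    """Root of f(x) = sum_i a_i/(x - b_i) - 1 between the (sorted) poles
    b_k and b_{k+1}, found as the zero of the pole-free function
        g(x) = (x - b_k)(x - b_{k+1}) f(x)
    with a bisection-safeguarded Newton iteration in the spirit of
    rtsafe (0-based indices)."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    bl, br = b[k], b[k + 1]
    mask = np.ones(len(b), bool)
    mask[[k, k + 1]] = False
    ao, bo = a[mask], b[mask]            # the remaining terms (i >= 3 above)
    def g_dg(x):
        p = (x - bl) * (x - br)          # prefactor cancelling both poles
        dp = 2.0 * x - bl - br
        s = np.sum(ao / (x - bo))
        ds = -np.sum(ao / (x - bo) ** 2)
        g = a[k] * (x - br) + a[k + 1] * (x - bl) - p + p * s
        dg = a[k] + a[k + 1] - dp + dp * s + p * ds
        return g, dg
    lo, hi = bl, br                      # bracket: g(lo) < 0 < g(hi)
    x = 0.5 * (lo + hi)
    for _ in range(maxit):
        g, dg = g_dg(x)
        if g < 0.0:
            lo = x
        else:
            hi = x
        xn = x - g / dg if dg != 0.0 else 0.5 * (lo + hi)
        if not (lo < xn < hi):           # Newton left the bracket: bisect
            xn = 0.5 * (lo + hi)
        if abs(xn - x) <= tol * max(1.0, abs(x)):
            return xn
        x = xn
    return x
```

With the six-term example above, `solve_between(a, b, 0)` returns the root between $b_1$ and $b_2$; this is only a sketch, and the starting guesses from the paper would speed it up further.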

You could even restrict the search to the interval $(b_1,x_*)$ or $(x_*,b_2)$, where $x_*=\frac{a_1b_2+a_2b_1}{a_1+a_2}$ is obtained by linear interpolation between the two endpoint values above (you just need to check the sign of $g_{(1,2)}(x_*)$).
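A hedged sketch of that refinement, with `g` a callable evaluating the pole-free form from the code above (the names are mine):

```python
def refined_bracket(a, b, k, g):
    """Shrink the initial bracket (b_k, b_{k+1}) using the chord through
    the endpoint values g(b_k) = a_k (b_k - b_{k+1}) < 0 and
    g(b_{k+1}) = -a_{k+1} (b_k - b_{k+1}) > 0: the chord crosses zero at
    x_* = (a_k b_{k+1} + a_{k+1} b_k) / (a_k + a_{k+1}), and the sign of
    g(x_*) tells on which side of x_* the root lies."""
    x_star = (a[k] * b[k + 1] + a[k + 1] * b[k]) / (a[k] + a[k + 1])
    return (b[k], x_star) if g(x_star) > 0.0 else (x_star, b[k + 1])
```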

Solution 2:

This looks to me like the secular equation that is solved in the divide-and-conquer algorithm for the symmetric eigenvalue problem, so it is widely studied, and efficient and stable implementations should be available. Here is an overview paper: A numerical comparison of methods for solving secular equations.

Here are some slides from Golub on the secular equation.