Why does Ridders' method work as well as it does?
Solution 1:
The Ridders article is very brief, less than 2 pages including graphs and numeric examples. The section on convergence is tiny, leaping from the setup to the conclusion with almost no explanation in between beyond "after some adequate [but unspecified] approximations".
Using his notation, the function under investigation is $F()$. The smaller end of the bracket is $x_0$, the larger end $x_2$, the midpoint $x_1$, and the calculated new value $x_3$. The (absolute) error in the $n$'th iterate is $$e_n = x_n - r$$ where $r$ is the root of $F()$: $F(r) = 0$. $F_n$ is used to denote $F(x_n)$.
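For reference, the update formula in this notation (my translation of the usual statement of the method, e.g. Wikipedia's, into Ridders' variable names) is $$x_3 = x_1 + (x_1 - x_0)\,\frac{\operatorname{sign}(F_0 - F_2)\, F_1}{\sqrt{F_1^2 - F_0 F_2}}$$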
So the convergence section starts by noting that, via Taylor expansion around $r$, we have $$F_n \approx e_n f + e_n^2 g + e_n^3 h$$ where $$f = F'(r)$$ $$g = F''(r)/2$$ $$h = F'''(r)/6$$ Then those approximations are plugged into the equation for $x_3$, replacing $F_0$, $F_1$, and $F_2$. Which yields quite a messy expression! After "adequate approximations", the result is revealed: $$e_3 \approx \frac{1}{2} e_0 e_1 e_2 \frac{g^2 - 2 f h}{f^2}$$
But how to get there remains a mystery to me ;-)
In practice, it does seem to be quite robust! The transformation does preserve the zeroes and the signs. While it may not be obvious at first, each iteration at least cuts the bracket width in half, so it can't take more iterations than bisection (although since each iteration requires two function evaluations, it may require more evaluations than bisection). For a linear function, the transformation effectively multiplies by 1 (i.e., does nothing), so does no harm. If a function is monotonic over a bracket, and convex or concave, the transformation pushes $x_3$ to the correct side of the midpoint to get closer to the root.
Should also note that, contrary to a comment, it's numerically well behaved. For example, while (using Wikipedia's notation): $$f(x_1)^2 - f(x_0) f(x_2)$$ may "look like" it's subtracting two positive numbers, thus potentially suffering massive cancellation, it's actually adding two positive numbers (the function values at the bracket endpoints have opposite signs): full machine precision is being exploited here.
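To make that concrete, here's a minimal sketch of the method in plain Python (my own code, not Ridders' pseudocode, and not production quality); it shows the midpoint evaluation, the transformed step, the no-cancellation evaluation of $F_1^2 - F_0 F_2$, and the re-bracketing that guarantees the interval at least halves:

```python
import math

def ridders(f, x0, x2, tol=1e-15, max_iter=100):
    # Minimal sketch of the method as I understand it (not Ridders' own
    # pseudocode).  Requires that f(x0) and f(x2) have opposite signs.
    F0, F2 = f(x0), f(x2)
    if F0 * F2 > 0:
        raise ValueError("root not bracketed")
    for _ in range(max_iter):
        x1 = 0.5 * (x0 + x2)                 # bisection point
        F1 = f(x1)
        # F1*F1 - F0*F2 adds two nonnegative quantities (F0*F2 <= 0), so
        # there's no cancellation and the square root is real.
        s = math.sqrt(F1 * F1 - F0 * F2)
        if s == 0.0:
            return x1
        # The transformed ("straightened") false-position step; x3 always
        # lands within half a bracket-width of x1.
        x3 = x1 + (x1 - x0) * math.copysign(1.0, F0 - F2) * F1 / s
        F3 = f(x3)
        if F3 == 0.0 or abs(x2 - x0) < tol:
            return x3
        # Keep the narrowest pair of the four points whose function values
        # have opposite signs; that bracket is at most half the old width.
        if F1 * F3 < 0:
            x0, F0, x2, F2 = x1, F1, x3, F3
        elif F0 * F3 < 0:
            x2, F2 = x3, F3
        else:
            x0, F0 = x3, F3
        if x0 > x2:                          # keep the endpoints ordered
            x0, x2, F0, F2 = x2, x0, F2, F0
    return 0.5 * (x0 + x2)

# Example: f(x) = x**2 - 4 on [1, 4] converges to 2.0.
print(ridders(lambda x: x * x - 4, 1.0, 4.0))
```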
Convergence skepticism
Best I can tell, even if $$e_3 \approx \frac{1}{2} e_0 e_1 e_2 \frac{g^2 - 2 f h}{f^2}$$ is correct (I have no reason to doubt it - I simply can't follow the telegraphic derivation), it doesn't follow that convergence is quadratic.
For example, let's apply the same kind of reasoning to the secant method, where the simplification doesn't require hidden reasoning. $$x_2 = \frac{F(x_1)x_0 - F(x_0)x_1}{F(x_1) - F(x_0)}$$ Reusing notation above, but with the simpler (dropping the 3rd derivative): $$F_n \approx e_n f + e_n^2 g$$ gives $$e_2 = x_2 - r = \frac{F(x_1)x_0 - F(x_0)x_1}{F(x_1) - F(x_0)} - r \approx$$ $$\frac{(e_1 f + e_1^2 g)(r + e_0) - (e_0 f + e_0^2 g)(r + e_1)}{(e_1 f + e_1^2 g) - (e_0 f + e_0^2 g)} - r =$$ $$\frac{e_0 e_1 g}{(e_0 + e_1)g+f}$$ Then we can say $(e_0 + e_1)g$ is insignificant compared to $f$, leaving $$e_2 \approx e_0 e_1 \frac{g}{f}$$ In fact the secant method has order of convergence $\approx 1.62$.
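As a quick numerical sanity check on that 1.62 figure (my own sketch, not part of the argument above), using Python's decimal module, as in the experiments described later, and the arbitrary test function $x^2 - 4$ with known root 2:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 100                    # plenty of precision

def f(x):                                  # test function: root at 2
    return x * x - 4

root = Decimal(2)
x0, x1 = Decimal(1), Decimal(4)
errors = [x0 - root, x1 - root]
for _ in range(10):
    F0, F1 = f(x0), f(x1)
    x2 = (F1 * x0 - F0 * x1) / (F1 - F0)   # secant step
    errors.append(x2 - root)
    x0, x1 = x1, x2

# Estimated order from consecutive errors (same formula as used in
# "Kicking the tires" below): p ~= log|e2/e1| / log|e1/e0|.
for e0, e1, e2 in zip(errors, errors[1:], errors[2:]):
    p = math.log(abs(float(e2 / e1))) / math.log(abs(float(e1 / e0)))
    print(round(p, 3))
```

The first few estimates are noise from the starting points; the later ones settle near the golden ratio $\approx 1.618$.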
In other 3-point methods where the error is proportional to the product of the prior 3 errors, the order of convergence is $\approx 1.84$ (for example, Muller's method, and other 3-point variants of the secant method). But even that isn't clear here: in the Ridders method, $x_1$ (and so also $e_1$) isn't derived from the main computation, but is merely the average of $x_0$ and $x_2$.
So I just don't believe the "quadratic convergence" claim in general here. It's nevertheless a fine method in practice! In my experience, a high order of convergence rarely matters unless working on arbitrary-precision libraries.
Kicking the tires
Short course: empirically, there's no evidence that the Ridders method enjoys quadratic convergence except in cases it's exceptionally well suited to. Of course someone else should verify this independently before believing it.
I tried some examples under the Ridders method, and under Tiruneh's clever 3-point secant method. The latter has order of convergence $\approx$ 1.84, via an argument that's relatively straightforward to follow.
I used decimal floating-point with 100 digits of precision (Python's decimal package), because things converge so fast it can be hard to detect patterns in native IEEE double precision.
Roughly speaking, a method has order of convergence $p$ if $$ |e_{n+1}| \approx C |e_n|^p$$ for some constant $C$.
It follows that, given a sequence of 3 consecutive errors $e_0$, $e_1$, and $e_2$, $p$ can be approximated by $$p \approx \log_{\left | e_1/e_0 \right | } \left (\left | \frac{e_2}{e_1} \right | \right )$$
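In code form (the helper's name and interface are my own), given a list of iterates and a known root:

```python
import math

def coc(xs, root):
    # Computed order of convergence for successive iterates `xs`
    # approaching a known `root` (both of the same numeric type),
    # using p ~= log_{|e1/e0|} |e2/e1|.
    es = [abs(x - root) for x in xs]
    ps = []
    for e0, e1, e2 in zip(es, es[1:], es[2:]):
        if 0 in (e0, e1, e2) or e1 == e0:
            break                      # formula breaks down; stop
        ps.append(math.log(float(e2 / e1)) / math.log(float(e1 / e0)))
    return ps

# A sequence whose error squares at each step shows p == 2 throughout:
print(coc([2.5, 2.25, 2.0625, 2.00390625, 2.0000152587890625], 2.0))
```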
A summary of some "typical" cases:
Ridders did very well on $5^x - 3125$, with root 5, and starting points 0.3 and 7.0. That was close to quadratic, but on the low side. This was expected to do very well, since the Ridders method transforms the function by a kind of exponential smoothing. Tiruneh was 1.84 on the nose.
But Ridders did poorly on the seemingly very easy $x^2 - 4$, with root 2 and starting points 1 and 4. The COC ("computed order of convergence", via the formula above) started at 1.24 and decreased quickly to 1.12, then continued falling more slowly to 1.06, when the root was found. Tiruneh's COC was again $\approx$ 1.84 throughout, and converged much faster.
Another bad Ridders case was $10 x \cdot 3^{-x^2} - 1$, starting with 1 and 2, converging to root $\approx$ 1.586. Ridders again spent most iterations with COC under 1.1, while Tiruneh remained 1.84. Much the same with starting points 0.01 and 0.2, converging to root $\approx$ 0.1011.
On the other hand, Ridders does great on the nightmare $e^{x^2 + 7 x - 30} - 1$, with starting points -10.5 and -9. That looks like a block capital "U", essentially -1 inside (-10, 3), essentially infinity outside it, and briefly hitting 0 at -10 and 3. Tiruneh makes slow progress, fighting for dozens of iterations just to get the first few digits right. Then it zooms in quickly (order 1.84). Ridders is extremely effective, in part because it's good at smoothing out exponential growth, but also because the way in which it (at least) cuts the bracket width in half each time manages to replace both bracket endpoints on each iteration. It enjoys quadratic convergence from the start.
Which reinforces a tradeoff mentioned earlier: there's no guarantee that Tiruneh will converge. Indeed, it's possible (but unlikely) for it to blow up (division by zero) even in simple cases if the function isn't strictly monotonic. But it doesn't require that its initial arguments straddle a root either. You get what you pay for ;-)
So, bottom line for me: there's no reason (neither in theory nor in practice) to believe the Ridders method has quadratic convergence in general. It can show that for functions with exponential growth, but falls to merely low-order superlinear convergence even for simple polynomials. Against that, it does always converge, and in no more iterations than bisection needs.
I expect a proper convergence analysis of the Ridders method would be technical and quite involved. It's really a 2-point method, which creates 2 additional points on each iteration, and may replace both the points the iteration started with. The code is easy enough, but that complicates analysis.
A bit of insight
Note: I'm going on here because I found this question when doing my own search for a decent discussion of the Ridders method. I didn't find one, so I'm making this what I wanted to find ;-)
Why is Ridders so ineffective on, e.g., $x^2 - 4$? For example, is "taking the bend out of the curve" systematically damaging in this case?
No! It's actually quite effective. It's maintaining the guarantee of convergence that hurts.
We started this one with points 1 and 4, at which $f()$ has values -3 and 12. The midpoint is $f(2.5) = 2.25$. $x_3$ is computed as 1.973..., which is really good! It's already got almost 3 correct decimal digits.
That and the midpoint are carried over to the next iteration (among the 4 internal points, the two spanning the narrowest bracket where $f()$ has different signs at the endpoints).
The new midpoint is 2.236... and the calculated value 1.99991... Those two are carried on to the next iteration, and that's what costs: 2.236 is still a rotten approximation, but remains the smallest we've seen where we know $f()$ is positive. Other secant-like methods, which don't guarantee to keep the root bracketed, would carry 1.973... and 1.99991... to the next iteration, vastly increasing the rate of convergence.
And it goes on that way, with the high bracket end falling to 2.0 essentially 1 bit per iteration. Coming in to the final iteration (at which $x_3$ rounds to exactly 2), $e_2 \approx -{10}^{-80}$ (about 80 digits correct), but $e_0 \approx 0.0002$. $x_0 = 2.000057...$ still has only about 5 digits correct. It's acting akin to "regula falsi" this way, keeping one endpoint over & over (although increasing its accuracy about one bit each time).
But then why is, e.g., $e^{x^2 + 7 x - 30} - 1$, with starting points -10.5 and -9, so effective? It turns out that, in that case, successive iterations alternate between overshooting and undershooting the root -10. This gives rapid convergence to the root from both bracket ends. The final iteration (at which $x_3$ rounds to exactly -10) starts with incoming $e_0 \approx -{10}^{-28}$ and $e_2 \approx {10}^{-56}$. The transformation is very well suited to this particular function, and gets 100 decimal digits correct out of those two. But it's more important that both endpoints moved substantially toward the root on the way there.
Ridders without bracketing
Suppose we dropped the feature of guaranteeing to converge: each iteration begins with the two most recent $x_3$ values, regardless of the signs of the function values at those points. So at the end of the current iteration, the incoming $x_2$ becomes $x_0$ for the next iteration, and the current iteration's $x_3$ becomes $x_2$ for the next.
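Concretely, a minimal sketch of that variation (again my own code); the only change from a bracketing implementation is the last line of the loop, which carries the two most recent iterates instead of a sign-change bracket:

```python
import math

def ridders_unbracketed(f, x0, x2, tol=1e-15, max_iter=100):
    # Sketch of the non-bracketing variation described above: each
    # iteration starts from the two most recent iterates, with no attempt
    # to keep a root bracketed.  It can fail (see the CAUTION below).
    for _ in range(max_iter):
        F0, F2 = f(x0), f(x2)
        x1 = 0.5 * (x0 + x2)                 # still uses the midpoint
        F1 = f(x1)
        d = F1 * F1 - F0 * F2
        if d <= 0.0:
            raise ArithmeticError("no real square root - method broke down")
        x3 = x1 + (x1 - x0) * math.copysign(1.0, F0 - F2) * F1 / math.sqrt(d)
        if abs(x3 - x2) < tol:
            return x3
        x0, x2 = x2, x3                      # incoming x2 -> x0, new x3 -> x2
    return x3

# The "bad" bracketing example x**2 - 4 on [1, 4] now converges quickly.
print(ridders_unbracketed(lambda x: x * x - 4, 1.0, 4.0))
```

(A real implementation would reuse function values already computed instead of re-evaluating $F_0$ and $F_2$ each time; this version favors brevity.)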
In that case, the "bad" Ridders examples given before all converge at least quadratically. Perhaps that distinct variation is what Ridders had in mind when he wrote "The rate of convergence is shown to be quadratic or better." (which his article does not show).
Assuming his error analysis was correct so far as it went, that's easy enough to show. By the definition of "order of convergence", we have (I'm going to skip absolute value operations below - they add visual noise but scant insight here - just pretend everything is positive): $$e_3 \approx C e_2^p$$ for some constants $C$ and $p$. We also have $$e_2 \approx C e_0^p$$ Substituting the latter into the former, $$e_3 \approx C (C e_0^p)^p = C^{p+1} e_0^{p^2}$$
But from the Ridders analysis, we also have $$e_3 \approx M e_0 e_1 e_2$$ for some constant $M$.
$e_1$ is the average of $e_0$ and $e_2$, so it's better than $e_0$ but worse than $e_2$. So, worst case, say $e_1$ is just as bad as $e_0$. Then $$e_3 \approx M e_0 e_0 e_2 = M e_0^2 (C e_0^p) = M C e_0^{p+2}$$ Equating the two expressions for $e_3$ gives $$C^{p+1} e_0^{p^2} = M C e_0^{p+2}$$ so we can conclude $$C^{p+1} = M C$$ $$p^2 = p+2$$ $p = 2$ follows from the last.
CAUTION: This explains why I saw quadratic convergence in some cases, but it's not safe in general. For example, without maintaining the guarantee that $F(x_0) \cdot F(x_2) \lt 0$ (i.e., that the function has different signs at the endpoints), there's no guarantee either that the square root needed is real. Indeed, there's not even a guarantee that $x_3$ will be between the incoming $x_0$ and $x_2$.
Solution 2:
Not an answer, just a plot that surprised me:
I thought that Ridders' method was the same as fitting $a + b \, e^{c x}$ to the 3 points.
Not so:
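The claim is easy to check numerically, too. Here's a small comparison sketch (mine; the test function $x^2 - 4$ on $[1, 4]$ is an arbitrary choice): it fits $a + b \, e^{c x}$ exactly through the three equally spaced points and compares that fit's root with the Ridders update, and the two come out different.

```python
import math

def exp_fit_root(x0, y0, x1, y1, x2, y2):
    # Root of the curve a + b*exp(c*x) interpolating three equally spaced
    # points (x1 must be the midpoint of x0 and x2, and the y's must be
    # strictly monotone with a sign change so the logs are real).
    d = x1 - x0                       # common spacing
    t = (y2 - y1) / (y1 - y0)         # t = exp(c*d)
    c = math.log(t) / d
    B = (y1 - y0) / (t - 1.0)         # B = b*exp(c*x0)
    a = y0 - B
    return x0 + math.log(-a / B) / c  # solve a + B*exp(c*(x - x0)) = 0

def ridders_step(x0, y0, x1, y1, x2, y2):
    # One Ridders update x3 from bracket (x0, x2) with midpoint x1.
    return x1 + (x1 - x0) * math.copysign(1.0, y0 - y2) * y1 / math.sqrt(y1 * y1 - y0 * y2)

def f(x):                             # arbitrary test function, root at 2
    return x * x - 4

x0, x2 = 1.0, 4.0
x1 = 0.5 * (x0 + x2)
print("exp-fit root:", exp_fit_root(x0, f(x0), x1, f(x1), x2, f(x2)))
print("Ridders x3:  ", ridders_step(x0, f(x0), x1, f(x1), x2, f(x2)))
```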
Solution 3:
More in-depth analysis
A very good general answer has already been given, but I'll add some more detailed analysis of the behavior of Ridders' method.
The first notable point is that the error approximation is signed and exact (by replacing $r$ in the derivatives with a point in the interval). This is important, because the sign of the new error is given by the sign of $-Ce_1$, where $e_1$ is the error of the bisection step and $C$ lumps together all of the approximately constant terms. If $C$ is negative, the new error has the same sign as $e_1$, which drags in the farther endpoint on every iteration and causes fast convergence. If $C$ is positive, the new error has the opposite sign from $e_1$, so one endpoint converges using only bisection while the other bound converges roughly linearly (the number of accurate digits grows roughly quadratically in the number of iterations).
If $C<0$, it can furthermore be shown that quadratic convergence does indeed happen. It is worth noting, however, that the Ostrowski index is only $\sqrt2\approx1.41$, since every iteration actually requires two function evaluations.
One can check that Tim Peters's examples of cases where Ridders' method converges slowly all satisfy $C>0$. In particular, it is easy to see that Ridders' method converges slowly for any quadratic function.
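For instance, reading $C$ as the lumped factor $\frac{1}{2}\,(g^2 - 2fh)/f^2$ from the error formula in Solution 1 (my reading of "the approximately constant terms"): any quadratic has $F''' \equiv 0$, so $h = 0$ and $$C = \frac{g^2}{2 f^2} \gt 0$$ whenever $g = F''(r)/2 \ne 0$, i.e., for every genuine quadratic, which is the slow case just described.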
Other 3-point methods will generally outperform Ridders' method, giving higher Ostrowski indices even in their worst cases, and converging faster in pathological cases (such as high-order roots). Even considering only the order of convergence and neglecting the Ostrowski index, other methods' worst cases converge much faster than the staggeringly slow cases of Ridders' method. Dekker's, Brent's, and Chandrupatla's methods, to name a few, are generally much better.
Intuitively, these methods perform much better because the third point they use is much closer to the root than the result of bisection. Using bisection does lead to a much simpler formula, thanks to cancellations, but adopting the same approach as these hybrid methods would yield much faster worst-case convergence.