How approximation search works
[Prologue]
This Q&A is meant to explain more clearly the inner workings of my approximation search class, which I first published here:
- Increasing accuracy of solution of transcendental equation
I have been asked for more detailed info about this a few times already (for various reasons), so I decided to write a Q&A style topic about it that I can easily reference in the future, without needing to explain it over and over again.
[Question]
How to approximate values/parameters in the real domain (double precision) to achieve fitting of polynomials and parametric functions, or to solve (difficult) equations (like transcendental ones)?
Restrictions:
- real domain (double precision)
- C++ language
- configurable precision of approximation
- known interval for search
- fitted value/parameter is not strictly monotonic or not a function at all
Solution 1:
Approximation search
This is an analogy to binary search, but without its restriction that the searched function/value/parameter must be a strictly monotonic function, while sharing the O(log(n)) complexity. Each recursion probes a fixed number of points while multiplying the accuracy by 10, so the cost grows only logarithmically with the desired precision.
For example, let us assume the following problem:
We have a known function y=f(x) and want to find x0 such that y0=f(x0). This can basically be done with the inverse function of f, but there are many functions for which we do not know how to compute the inverse (for example y=x+sin(x)). So how do we compute this in such a case?
Knowns:
- y=f(x) - input function
- y0 - wanted point y value
- a0,a1 - solution x interval range
Unknowns:
- x0 - wanted point x value, which must be in the range x0=<a0,a1>
Algorithm:
1. probe some points x(i)=<a0,a1> evenly dispersed along the range with some step da
   So for example x(i)=a0+i*da where i={ 0,1,2,3,... }
2. for each x(i) compute the distance/error ee of the y=f(x(i))
   This can be computed for example like this: ee=fabs(f(x(i))-y0), but any other metric can be used too.
3. remember the point aa=x(i) with minimal distance/error ee
4. stop when x(i)>a1
5. recursively increase accuracy
   So first restrict the range to search only around the found solution, for example a0'=aa-da; a1'=aa+da;, then increase the precision of the search by lowering the search step: da'=0.1*da;. If da' is not too small and the max recursion count has not been reached, go to #1.
6. the found solution is in aa
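To make the steps concrete, here is a minimal standalone sketch of the algorithm as a plain C++ function (the name approx_search and the fixed recursion count are just illustrative choices; the reusable class version follows below):

#include <math.h>

// minimal sketch: find x in <a0,a1> minimizing |f(x)-y0| by recursive refinement
double approx_search(double (*f)(double),double y0,double a0,double a1,double da,int n)
{
    double aa=a0;                           // best x found so far
    for (int i=0;i<n;i++)                   // n refinement recursions
    {
        double e0=-1.0;                     // best error so far (<0 means none yet)
        for (double a=a0;a<=a1;a+=da)       // probe points x(i)=a0+i*da
        {
            double ee=fabs(f(a)-y0);        // distance/error metric
            if ((e0<0.0)||(ee<e0)) { e0=ee; aa=a; }
        }
        a0=aa-da;                           // restrict range around best point
        a1=aa+da;
        da*=0.1;                            // increase accuracy 10x
    }
    return aa;                              // approximated x0
}

Note that the naive a<=a1 loop condition can miss the exact endpoint due to floating point accumulation; the class version below clamps the last probe to a1.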
This is what I have in mind:
On the left side the initial search is illustrated (bullets #1,#2,#3,#4). On the right side is the next recursive search (bullet #5). This will recursively loop until the desired accuracy is reached (number of recursions). Each recursion increases the accuracy 10 times (0.1*da). The gray vertical lines represent the probed x(i) points.
Here is the C++ source code for this:
//---------------------------------------------------------------------------
//--- approx ver: 1.01 ------------------------------------------------------
//---------------------------------------------------------------------------
#ifndef _approx_h
#define _approx_h
#include <math.h>
//---------------------------------------------------------------------------
class approx
{
public:
    double a,aa,a0,a1,da,*e,e0; // a: probed x, aa: best x so far, a0,a1: search range, da: step, e: pointer to user-computed error, e0: best error so far
    int i,n;                    // i: recursion level, n: number of recursions
    bool done,stop;             // done: search finished, stop: current pass reached a1

    approx() { a=0.0; aa=0.0; a0=0.0; a1=1.0; da=0.1; e=NULL; e0=-1.0; i=0; n=5; done=true; }
    approx(const approx& a) { *this=a; }
    ~approx() {}
    approx* operator = (const approx *a) { *this=*a; return this; }
    //approx* operator = (const approx &a) { ...copy... return this; }

    void init(double _a0,double _a1,double _da,int _n,double *_e)
    {
        if (_a0<=_a1) { a0=_a0; a1=_a1; }   // ensure a0 <= a1
        else          { a0=_a1; a1=_a0; }
        da=fabs(_da);
        n =_n ;
        e =_e ;
        e0=-1.0;                            // no error computed yet
        i=0; a=a0; aa=a0;
        done=false; stop=false;
    }
    void step()
    {
        if ((e0<0.0)||(e0>*e)) { e0=*e; aa=a; } // better solution
        if (stop)                               // increase accuracy
        {
            i++; if (i>=n) { done=true; a=aa; return; } // final solution
            a0=aa-fabs(da);                     // restrict range around best solution
            a1=aa+fabs(da);
            a=a0; da*=0.1;                      // restart pass with a 10x finer step
            a0+=da; a1-=da;                     // shrink range by one new step on each side
            stop=false;
        }
        else
        {
            a+=da; if (a>a1) { a=a1; stop=true; } // next point
        }
    }
};
//---------------------------------------------------------------------------
#endif
//---------------------------------------------------------------------------
This is how to use it:
approx aa;
double ee,x,y,x0,y0=here_your_known_value;
// a0, a1, da,n, ee
for (aa.init(0.0,10.0,0.1,6,&ee); !aa.done; aa.step())
{
x = aa.a; // this is x(i)
 y = f(x);        // here compute the y value for whatever you want to fit
ee = fabs(y-y0); // compute error of solution for the approximation search
}
In the comment above the for (aa.init(...) line the operands are named. The a0,a1 is the interval on which the x(i) is probed, da is the initial step between the x(i), and n is the number of recursions. So if n=6 and da=0.1, the final max error of the x fit will be ~0.1/10^6=0.0000001. The &ee is a pointer to the variable where the actual error will be computed. I chose a pointer so there are no collisions when nesting this, and also for speed, as passing a parameter to a heavily used function adds call overhead (stack thrashing).
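For a concrete (illustrative) example, here is the whole pattern applied to the transcendental equation x=cos(x) on <0,10>, assuming the class above is saved as approx.h:

#include <math.h>
#include <stdio.h>
#include "approx.h"  // the approx class from above (assumed file name)

int main()
{
    approx aa;
    double ee,x,y,y0=0.0;                   // we want f(x0)=0 where f(x)=x-cos(x)
    //       a0,  a1, da,n, ee
    for (aa.init(0.0,10.0,0.1,6,&ee);!aa.done;aa.step())
    {
        x = aa.a;                           // probed x(i)
        y = x-cos(x);                       // the root of this is the solution of x=cos(x)
        ee = fabs(y-y0);                    // error of solution for the search
    }
    printf("x0 ~ %.7f\n",aa.aa);            // prints ~0.7390851
    return 0;
}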
[Notes]
This approximation search can be nested to any dimensionality (but of course you need to be careful about the speed); see the nested 2D sketch after this list and some examples:
- Approximation of n points to the curve with the best fit
- Curve fitting with y points on repeated x positions (Galaxy Spiral arms)
- Increasing accuracy of solution of transcendental equation
- Find Minimum area ellipse enclosing a set of points in c++
- 2D TDoA Time Difference of Arrival
- 3D TDoA Time Difference of Arrival
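As promised above, here is a minimal sketch of such nesting in 2D; the error surface f2 and the search ranges are hypothetical examples, not taken from the links:

#include "approx.h"  // the approx class from above (assumed file name)

// hypothetical 2D error surface with its minimum at (0.3,-0.7)
double f2(double x,double y) { return ((x-0.3)*(x-0.3))+((y+0.7)*(y+0.7)); }

void fit2d(double &bx,double &by)
{
    approx ax,ay;
    double exy,ee,x,y,be=-1.0;
    for (ax.init(-2.0,2.0,0.1,4,&exy);!ax.done;ax.step())     // outer: search x
    {
        x=ax.a;
        for (ay.init(-2.0,2.0,0.1,4,&ee);!ay.done;ay.step())  // inner: search y for this x
        {
            y=ay.a;
            ee=f2(x,y);                                       // error metric (already >=0)
        }
        exy=ay.e0;                                            // best error over y for this x
        if ((be<0.0)||(exy<be)) { be=exy; bx=x; by=ay.aa; }   // remember best (x,y) pair
    }
}

The key point is that the inner search's best error (ay.e0) is what drives the outer search.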
In case of a non-function fit, when you need to get "all" the solutions, you can use recursive subdivision of the search interval after a solution is found to check for other solutions (a rough sketch follows after this example). See example:
- Given an X co-ordinate, how do I calculate the Y co-ordinate for a point so that it rests on a Bezier Curve
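Here is a rough sketch of that subdivision idea (the solve helper, the acceptance threshold eps and the exclusion step dx are hypothetical choices):

#include <math.h>
#include <vector>
#include "approx.h"  // the approx class from above (assumed file name)

// find one solution of f(x)=y0 on <a0,a1> with the approximation search
double solve(double (*f)(double),double y0,double a0,double a1,double &err)
{
    approx aa; double ee;
    for (aa.init(a0,a1,0.025*(a1-a0),6,&ee);!aa.done;aa.step())
        ee=fabs(f(aa.a)-y0);
    err=aa.e0; return aa.aa;
}

// collect "all" solutions by subdividing the interval around each found one
void solve_all(double (*f)(double),double y0,double a0,double a1,
               double eps,double dx,std::vector<double> &out)
{
    if (a1-a0<dx) return;                   // interval too small -> stop recursion
    double err,x=solve(f,y0,a0,a1,err);
    if (err>eps) return;                    // no acceptable solution in this interval
    out.push_back(x);                       // store the found solution
    solve_all(f,y0,a0,x-dx,eps,dx,out);     // search left of it
    solve_all(f,y0,x+dx,a1,eps,dx,out);     // search right of it
}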
What should you be aware of?
You have to carefully choose the search interval <a0,a1> so that it contains the solution but is not too wide (or the search would be slow). The initial step da is also very important: if it is too big you can miss local min/max solutions, and if it is too small the whole thing gets too slow (especially for nested multidimensional fits).
Solution 2:
A combination of the secant (with bracketing, but see the correction at the bottom) and bisection methods is much better:
We find root approximations by secants, and keep the root bracketed as in bisection.
Always keep the two edges of the interval such that the delta at one edge is negative and at the other it is positive, so the root is guaranteed to be inside; and instead of halving, use the secant method.
Pseudocode:
given a function f
given two points a, b, such that a < b and sign(f(a)) /= sign(f(b))
given tolerance tol
find root z of f such that abs(f(z)) < tol -- stop_condition
DO:
x = root of f by linear interpolation of f between a and b
m = midpoint between a and b
if stop_condition holds at x or m, set z and STOP
  [a,b] := [a,x,m,b].sort.choose_shortest_interval_with_opposite_signs_at_its_ends
This obviously halves the interval [a,b], or does even better, at each iteration; so unless the function behaves extremely badly (like, say, sin(1/x) near x=0), this will converge very quickly, taking at most two evaluations of f per iteration step.
And we can detect the badly behaving cases by checking that b-a does not become too small (esp. if we're working with finite precision, as with doubles).
Update: apparently this is actually the double false position method, which is the secant method with bracketing, as described by the pseudocode above. Augmenting it with the middle point, as in bisection, ensures convergence even in the most pathological cases.
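For reference, here is a direct C++ sketch of that pseudocode (the function name and the bracket-selection details are my own reading of it):

#include <math.h>

// double false position with bisection midpoint, following the pseudocode above
double find_root(double (*f)(double),double a,double b,double tol)
{
    double fa=f(a),fb=f(b);                 // assumes sign(fa) != sign(fb)
    if (fabs(fa)<tol) return a;
    if (fabs(fb)<tol) return b;
    for (;;)
    {
        double x=a-fa*(b-a)/(fb-fa);        // secant root by linear interpolation
        double m=0.5*(a+b);                 // bisection midpoint
        double fx=f(x),fm=f(m);
        if (fabs(fx)<tol) return x;         // stop condition at x
        if (fabs(fm)<tol) return m;         // stop condition at m
        double p0=x,f0=fx,p1=m,f1=fm,t;     // sort so that a < p0 <= p1 < b
        if (p0>p1) { t=p0;p0=p1;p1=t; t=f0;f0=f1;f1=t; }
        // of [a,p0],[p0,p1],[p1,b] keep the shortest with opposite signs at its ends
        double na=a,nfa=fa,nb=b,nfb=fb,best=b-a;
        if ((fa*f0<0.0)&&(p0-a <best)) { na=a;  nfa=fa; nb=p0; nfb=f0; best=p0-a;  }
        if ((f0*f1<0.0)&&(p1-p0<best)) { na=p0; nfa=f0; nb=p1; nfb=f1; best=p1-p0; }
        if ((f1*fb<0.0)&&(b-p1 <best)) { na=p1; nfa=f1; nb=b;  nfb=fb; best=b-p1;  }
        if (best>=b-a) return m;            // no bracketing sub-interval found (degenerate)
        a=na; fa=nfa; b=nb; fb=nfb;
    }
}

For example, find_root with f(x)=cos(x)-x on <0,1> and tol=1e-12 converges to ~0.7390851 in a handful of iterations.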