Bhaskara's sine approximation looks a bit like a Padé approximant and is eerily accurate.
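To see what that buys you here: substituting Bhaskara's approximation $\sin x \approx 16x(\pi-x)/\bigl(5\pi^2-4x(\pi-x)\bigr)$ into $\sin x = a\,x$ and dividing out $x\neq0$ leaves a quadratic in $x$. A minimal Python sketch (the helper name `bhaskara_root` is mine):

```python
import math

def bhaskara_root(a):
    """Solve sin(x) = a*x on (0, pi] using Bhaskara's approximation
    sin x ~= 16 x (pi - x) / (5 pi^2 - 4 x (pi - x)).
    Dividing out x != 0 leaves the quadratic
    4a x^2 + (16 - 4a pi) x + (5a pi^2 - 16 pi) = 0."""
    pi = math.pi
    if a == 0.0:
        return pi  # limiting case: sin(pi) = 0
    A = 4.0 * a
    B = 16.0 - 4.0 * a * pi
    C = 5.0 * a * pi * pi - 16.0 * pi
    # C < 0 for a < 16/(5 pi), so the "+" branch is the root in (0, pi].
    return (-B + math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)

x = bhaskara_root(0.5)
print(x, math.sin(x) - 0.5 * x)
```

One caveat: Bhaskara's formula gives $\operatorname{sinc}(0)=16/(5\pi)\approx1.019$ rather than $1$, so the accuracy degrades as $a\to1$; mid-range it does quite well.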

Another option---since you need to do this trillions of times---is to save the solutions for many values of $a$, and then look up/interpolate solutions as you need them.
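A minimal sketch of that precompute-and-interpolate idea (grid size, names, and the pure-stdlib bisection for the offline solves are all my choices):

```python
import math

def solve_sinc(a, lo=1e-12, hi=math.pi):
    """Solve sin(x) = a*x on (0, pi] by bisection.
    This runs offline, so its speed is irrelevant."""
    for _ in range(60):  # halves the bracket to ~1e-18
        mid = 0.5 * (lo + hi)
        if math.sin(mid) - a * mid > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Offline: tabulate the solution on a uniform grid in a.
N = 2000
TABLE = [solve_sinc(i / N) for i in range(N + 1)]

def lookup(a):
    """Online: linear interpolation into the precomputed table."""
    t = a * N
    i = min(int(t), N - 1)
    w = t - i
    return (1.0 - w) * TABLE[i] + w * TABLE[i + 1]
```

With trillions of queries the table cost amortizes to nothing; the square-root behaviour near $a=1$ means a uniform grid in $a$ loses accuracy there, so a table in $\sqrt{1-a}$ would be a natural refinement.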


We have to solve the equation ${\rm sinc}(x)=a$ for $x\in[0,\pi]$ in terms of the parameter $a\in[0,1]$. As Kirill has remarked, for $a\to1^-$ we obtain $x\doteq\sqrt{6(1-a)}$. Therefore, if we want a single simple expression giving accurate values over the whole $a$-interval $[0,1]$ we have to take a square root at the end.

For this reason we put $x:=\sqrt{u}$ and solve the equation $$\bigl(s(u):=\bigr)\quad {\rm sinc}\bigl(\sqrt{u}\bigr)=a\tag{1}$$ for $u$ in terms of $a$. Let $$a\mapsto u:=f(a)\qquad(0\leq a\leq1)$$ be the solution of $(1)$, i.e. the inverse function of $s$. From known values of $s$ and $s'$ we can, e.g., deduce the values $$f(0)=\pi^2,\quad f\left({2\over\pi}\right)={\pi^2\over4},\quad f\left({\sqrt{27}\over 4\pi}\right)={4\pi^2\over9},\quad f(1)=0\ ,\tag{2}$$ and $$f'(0)=-2\pi^2\tag{3}$$ (one could take more such values into account and determine more coefficients $c_k$, $d_k$ in the following).

We now make the "Ansatz" $$\tilde f(a):={p(a)\over q(a)},\qquad p(a):=\pi^2+c_1 a+ c_2 a^2,\quad q(a):=1+d_1a +d_2 a^2\ ,$$ and determine the coefficients $c_1$, $c_2$, $d_1$, $d_2$ such that $(2)$ and $(3)$ are satisfied. This leads to the approximation $$x=g(a):=\sqrt{\tilde f(a)}=\sqrt{{\pi^2+5.95839 a - 15.828 a^2\over 1 + 2.60371 a + 0.690687 a^2}}\ .$$ The following figure shows a plot of $a\mapsto \sin\bigl(g(a)\bigr)-a\> g(a)$:

*(figure: plot of $a\mapsto \sin\bigl(g(a)\bigr)-a\,g(a)$)*
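The approximation above transcribes directly into code (a sketch; coefficients copied from the formula, and note that the rounded coefficients make the numerator very slightly negative at $a=1$, so it is clamped):

```python
import math

def g(a):
    """Rational-in-a approximation x = sqrt(p(a)/q(a)) from above."""
    p = math.pi**2 + 5.95839 * a - 15.828 * a * a
    q = 1.0 + 2.60371 * a + 0.690687 * a * a
    # Rounding of the printed coefficients leaves p(1) ~ -6e-6 < 0.
    return math.sqrt(max(p, 0.0) / q)
```

The interpolation conditions $(2)$ are easy to spot-check, e.g. $g(2/\pi)\approx\pi/2$.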


Hi, interesting question. Using Hermite polynomials, $$\text{Solve}\left[\sum _{n=0}^3 \frac{\left(i (-1)^{2 n} i^n 2^{-n-1} \left((-1)^n-1\right)\right) H_n(x)}{\sqrt[4]{e} n!}=a x,x\right]$$ you get $$\left\{\{x\to 0\},\left\{x\to -\sqrt{\frac{3}{2}} \sqrt{5-4 \sqrt[4]{e} a}\right\},\left\{x\to \sqrt{\frac{3}{2}} \sqrt{5-4 \sqrt[4]{e} a}\right\}\right\},$$ which gives around three digits of precision.
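For reference, the positive root above is a one-liner to evaluate (a sketch; note the expression is only real for $a\le 5/(4\sqrt[4]{e})\approx0.973$, and the accuracy is best for moderate $a$):

```python
import math

def hermite_root(a):
    """Positive root of the truncated-Hermite approximation:
    x = sqrt(3/2) * sqrt(5 - 4 e^(1/4) a), real for a <= 5/(4 e^(1/4))."""
    return math.sqrt(1.5) * math.sqrt(5.0 - 4.0 * math.e**0.25 * a)

x = hermite_root(0.5)
print(x, math.sin(x) - 0.5 * x)
```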


Here is a way to solve this equation quickly that I think is more efficient than the methods proposed so far. The reason I think this is more efficient is that it allows you to get about 15 digits with some multiplications (which are cheap), one square root, and at most one sin-cos evaluation (which is expensive).

Your equation reminds me of the Kepler equation, which I once tried to analyse using a similar approach.

Instead of approximating $\sin x$ by some algebraic function and then solving the approximate equation, let's approximate the true solution itself. Denote by $y(a)$, $0<y(a)<\pi$, $0<a<1$, the solution of the equation $$ \sin y(a)-a y(a) = 0. $$ Then $y(a)$ has a root, $y(1)=0$, and it behaves at $a\approx1$ as $$ y(a) \sim \sqrt{6}\,(1-a)^{1/2}. $$ Here is the graph of $y(a)/\sqrt{1-a}$:

*(figure: graph of $y(a)/\sqrt{1-a}$, in red)*

Now the red function $y(a)/\sqrt{1-a}$ is smooth, non-singular, and has no roots, so we can approximate it with a Chebyshev series $\sum_{k\geq0}c_k T_k(x)-\frac12 c_0$ in $x=2a-1$, with these coefficients:

[5.468893253088218, -0.3315308418632746, 0.05711908619285262, -0.0133386391330606, 0.003604590132654065, -0.001061080932244192, 0.0003303619584939474, -0.0001070040549514469, 3.56907424708586e-5, -1.217702854008323e-5, 4.22996639149332e-6, -1.491025149248465e-6, 5.319859653014493e-7, -1.917577069128536e-7, 6.972596591016018e-8, -2.554518680989035e-8, 9.420605435988877e-9, -3.494307402618534e-9, 1.302781037141137e-9, -4.879450499229215e-10, 1.835088955571608e-10, -6.927187870873964e-11, 2.62374324329517e-11, -9.968294055829801e-12, 3.797891128370018e-12, -1.450731732722623e-12, 5.554781896411238e-13, -2.131593799567149e-13, 8.196531167958859e-14, -3.15778010597506e-14, 1.218720260971691e-14, -4.711359031839824e-15, 1.824159314975387e-15, -7.073133344264407e-16, 2.746327946190114e-16, -1.067647609986269e-16, 4.153961510615797e-17, -1.613912600904692e-17, 6.171147771603768e-18, -2.090308692683401e-18]

Depending on how much accuracy you need, it might be enough to sum the first 34 terms using the Clenshaw recurrence (this takes 33 multiplications, and uses no expensive operations, like divisions). Here is the plot of relative error for successive Chebyshev approximations to $y(a)/\sqrt{1-a}$:

*(figure: relative error of successive Chebyshev approximations to $y(a)/\sqrt{1-a}$)*
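A sketch of the Clenshaw evaluation (I am assuming the standard mapping $x=2a-1$ from $[0,1]$ onto $[-1,1]$, which reproduces the tabulated values; only the first dozen coefficients from the list above are kept here):

```python
import math

# First 12 of the Chebyshev coefficients listed above (enough for ~1e-6).
C = [5.468893253088218, -0.3315308418632746, 0.05711908619285262,
     -0.0133386391330606, 0.003604590132654065, -0.001061080932244192,
     0.0003303619584939474, -0.0001070040549514469, 3.56907424708586e-5,
     -1.217702854008323e-5, 4.22996639149332e-6, -1.491025149248465e-6]

def y_cheb(a):
    """y(a) ~= sqrt(1-a) * (sum_k C[k] T_k(x) - C[0]/2) with x = 2a - 1,
    evaluated by the Clenshaw recurrence (no divisions, no trig)."""
    x = 2.0 * a - 1.0
    b1 = b2 = 0.0
    for c in reversed(C[1:]):          # b_k = c_k + 2x b_{k+1} - b_{k+2}
        b1, b2 = c + 2.0 * x * b1 - b2, b1
    s = C[0] / 2.0 + x * b1 - b2       # sum minus the c_0/2 correction
    return math.sqrt(1.0 - a) * s
```

Keeping all 40 coefficients instead pushes this to full double precision, as the error plot shows.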

Depending on the relative cost of trigonometric operations on your machine (this should be established experimentally), it may be faster to take just enough Chebyshev terms that one Newton iteration suffices to get accuracy to machine precision. Here are 8 terms followed by one Newton iteration (blue) or one Halley iteration (red). The only expensive operation here would be the simultaneous evaluation of sin and cos for the function and its derivative:

*(figure: error after one Newton iteration (blue) or one Halley iteration (red) on the 8-term Chebyshev start)*

The cost of this is 7 multiplications for the Chebyshev series, and 4 multiplications, 2 divisions and one sin-cos for the Halley iteration, and one square root and a multiplication to get back $y(a)$.
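Putting the pieces together, a sketch of the 8-term-Chebyshev-plus-one-Halley scheme (coefficients truncated from the list above; the mapping $x=2a-1$ is my assumption):

```python
import math

# First 8 Chebyshev coefficients from the list above (~5e-5 accurate start).
C = [5.468893253088218, -0.3315308418632746, 0.05711908619285262,
     -0.0133386391330606, 0.003604590132654065, -0.001061080932244192,
     0.0003303619584939474, -0.0001070040549514469]

def solve(a):
    """y with sin(y) = a*y: 8-term Clenshaw start, then one Halley step."""
    x = 2.0 * a - 1.0
    b1 = b2 = 0.0
    for c in reversed(C[1:]):
        b1, b2 = c + 2.0 * x * b1 - b2, b1
    y = math.sqrt(1.0 - a) * (C[0] / 2.0 + x * b1 - b2)
    # One Halley iteration on f(y) = sin(y) - a*y; the sin and cos here
    # are the single expensive call of the whole computation.
    s, c = math.sin(y), math.cos(y)
    f, fp, fpp = s - a * y, c - a, -s
    return y - 2.0 * f * fp / (2.0 * fp * fp - f * fpp)

y = solve(0.5)
```

The cubic convergence of Halley's method takes the $\sim10^{-5}$ error of the truncated Chebyshev start down to machine-precision territory in that single step.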