How to solve $x+\sin(x)=b$

How can I solve $x+\sin(x)=b$ for $x \in [0,π]$?

We take $b \in [0,π]$. I don't know how to find the solution.


You can't solve it in closed form, but you can approximate the solution.

Here is the expansion for $\require{cancel}\sin(x)$ around $x=\pi/2$:

$$\sin(x)=1-\frac{(x-\frac\pi2)^2}{1\times2}+\frac{(x-\frac\pi2)^4}{1\times2\times3\times4}-\dots$$

To give a crude estimate,

$$\sin(x)\approx1-\frac{(x-\frac\pi2)^2}2$$

So let us solve the approximate problem:

$$x+\left(1-\frac{(x-\frac\pi2)^2}2\right)=b$$

$$\frac12x^2-\left(1+\frac\pi2\right)x+\frac{\pi^2}8-1+b=0\tag{expand}$$

$$x\approx1+\frac\pi2\pm\sqrt{3+\pi-2b}\tag{Quadratic Formula}$$

With a quick check (substitute each candidate into $x+\sin(x)$), you can determine whether the $+\sqrt{}$ or the $-\sqrt{}$ root is the one you are looking for; it is the $-\sqrt{}$ root, since that is the increasing branch of the parabola.

Use longer expansions of sine for better approximations.

For example, if $b=1$, then

$$x\approx1+\frac\pi2\pm\sqrt{1+\pi}=\begin{cases}0.5357\\\cancel{4.6059}\end{cases}$$

Since $4.6059$ lies outside $[0,\pi]$, the other root, $x\approx0.5357$, is the correct approximation. Plugging it back in gives $x+\sin(x)\approx1.0462$, a decent approximation to start with.
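The crude estimate above is easy to script. Here is a minimal Python sketch (the function name `crude_root` is my own); it takes the $-\sqrt{}$ root, which lies on the increasing branch of the parabola:

```python
import math

def crude_root(b):
    """Crude estimate of the root of x + sin(x) = b on [0, pi], using the
    quadratic approximation sin(x) ~ 1 - (x - pi/2)**2 / 2.
    Only valid while the discriminant 3 + pi - 2b is nonnegative,
    i.e. for b up to (3 + pi)/2 ~ 3.07."""
    return 1 + math.pi / 2 - math.sqrt(3 + math.pi - 2 * b)

x = crude_root(1.0)
print(x, x + math.sin(x))   # about 0.5357 and 1.0462
```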


You could also use a numerical method such as Newton's method, as mentioned in the comments.

Applying Newton's method gives the following algorithm:

$$x_{n+1}=x_n-\frac{x_n+\sin(x_n)-b}{1+\cos(x_n)}$$

For $b=1$ and initial guess being $x_0=0.5357$, we get

$$\begin{align} x_0 & =0.5357 \\ x_1 & =0.5109 \\ x_2 & =0.5110 \\ x_3 & =0.510973429\dots \end{align}$$

Indeed, taking that final value for $x$ gives $x_3+\sin(x_3)=1$ to as many digits as my calculator shows.
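The iteration above can be sketched in a few lines of Python (the function name `newton_solve` and the stopping rule are my own choices):

```python
import math

def newton_solve(b, x0, tol=1e-12, max_iter=50):
    """Solve x + sin(x) = b by Newton's method, starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = (x + math.sin(x) - b) / (1 + math.cos(x))
        x -= step
        if abs(step) < tol:   # stop once the update is negligible
            break
    return x

x = newton_solve(1.0, 0.5357)
print(x, x + math.sin(x))   # about 0.510973 and 1.0
```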


Here is an approach, if you need a numerical approximation.

The function $x\mapsto x+\sin x$ is continuous and increasing on $[0,\pi]$, hence it defines a bijection from $[0,\pi]$ onto itself. Let $f(y)$ be the unique solution on $[0,\pi]$ of the equation $y=x+\sin x$, for $y\in[0,\pi]$.
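Because the function is continuous and strictly increasing, bisection is guaranteed to converge to $f(y)$; a minimal Python sketch (not part of the original answer):

```python
import math

def f_inverse(y, tol=1e-12):
    """Invert y = x + sin(x) on [0, pi] by bisection; this works
    because x + sin(x) is continuous and increasing there."""
    lo, hi = 0.0, math.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + math.sin(mid) < y:
            lo = mid          # root is in the upper half
        else:
            hi = mid          # root is in the lower half
    return 0.5 * (lo + hi)

print(f_inverse(1.0))   # about 0.510973
```

Bisection is slow (one bit of accuracy per step) but robust; the series and Newton refinements below are much faster.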

*(Plot of $x+\sin x$ on $[0,\pi]$ omitted.)*

You may use series reversion around a point to find an approximation of the solution. Either that is accurate enough on its own, or you can use it as the initial value for an iterative method (Newton's method or Halley's method, for instance).

With the help of a computer algebra system (here, Maxima) you get, around $y=0$,

$$f(y)=\frac{1}{2}y+\frac{1}{96}y^3+\frac{1}{1920}y^5+\frac{43}{1290240}y^7+\frac{223}{92897280}y^9+\frac{60623}{326998425600}y^{11}+\frac{764783}{51011754393600}y^{13}+\frac{107351407}{85699747381248000}y^{15}+O(y^{17})$$
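This series is straightforward to evaluate; here is a Python sketch with the coefficients copied from the formula above (the names are my own):

```python
import math

# Coefficients (power, numerator, denominator) of the series around y = 0.
COEFFS = [
    (1, 1, 2), (3, 1, 96), (5, 1, 1920), (7, 43, 1290240),
    (9, 223, 92897280), (11, 60623, 326998425600),
    (13, 764783, 51011754393600), (15, 107351407, 85699747381248000),
]

def f_series_0(y):
    """Series approximation of f(y) near y = 0, truncated at y**15."""
    return sum(num * y**k / den for k, num, den in COEFFS)

x = f_series_0(1.0)
print(x, x + math.sin(x))   # about 0.510973 and 1.0
```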

Around $y=\pi$ the function is flat (the derivative of $x+\sin x$ vanishes at $x=\pi$), so plain series reversion no longer applies; instead you get

$$f(y)=\pi-g\left(6^{1/3}(\pi-y)^{1/3}\right)$$

with

$$g(t)=t+\frac{1}{60}t^{3}+\frac{1}{1400}t^{5}+\frac{1}{25200}t^{7}+\frac{43}{17248000}t^{9}+\frac{1213}{7207200000}t^{11}+\frac{151439}{12713500800000}t^{13}+\frac{33227}{38118080000000}t^{15}+\frac{16542537833}{252957982717440000000}t^{17}+\frac{887278009}{177399104762880000000}t^{19}+O(t^{21})$$

With this, you get an absolute error less than $2\cdot10^{-6}$ for any $y$ between $0$ and $\pi$, if you take the first approximation for $y<1.73714$ and the second approximation for $y\geq1.73714$.

With one step of Newton's method starting from this value, you should get a precision of the order of $10^{-12}$.
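As a sanity check, the whole scheme fits in a short Python sketch: the two series (coefficients copied from the formulas above), the split at $y=1.73714$, and one Newton step. The structure and names are mine, not from the original answer:

```python
import math

# Coefficients (power, numerator, denominator) copied from the two series.
NEAR_0 = [
    (1, 1, 2), (3, 1, 96), (5, 1, 1920), (7, 43, 1290240),
    (9, 223, 92897280), (11, 60623, 326998425600),
    (13, 764783, 51011754393600), (15, 107351407, 85699747381248000),
]
NEAR_PI = [
    (1, 1, 1), (3, 1, 60), (5, 1, 1400), (7, 1, 25200),
    (9, 43, 17248000), (11, 1213, 7207200000),
    (13, 151439, 12713500800000), (15, 33227, 38118080000000),
    (17, 16542537833, 252957982717440000000),
    (19, 887278009, 177399104762880000000),
]

def poly(coeffs, t):
    return sum(num * t**k / den for k, num, den in coeffs)

def f(y):
    """Piecewise series approximation of the inverse of x + sin(x)
    on [0, pi], refined with one Newton step."""
    if y >= math.pi:              # the inverse maps pi to pi exactly
        return math.pi
    if y < 1.73714:               # split point quoted above
        x = poly(NEAR_0, y)
    else:
        t = (6 * (math.pi - y)) ** (1 / 3)
        x = math.pi - poly(NEAR_PI, t)
    # One Newton step on x + sin(x) - y.
    return x - (x + math.sin(x) - y) / (1 + math.cos(x))

for y in (0.5, 1.0, 2.0, 3.0):
    x = f(y)
    print(y, x, abs(x + math.sin(x) - y))
```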


Of course, there are other ways; this is only a simple and relatively naive approach. You could split the interval $[0,\pi]$ into more than two subintervals (using other expansions of the solution), or you could look for an optimal polynomial approximation, or a rational approximation (the latter is probably what a more robust implementation would use). These are better because, instead of a good approximation around one point that degrades as you move away from it, you get an approximation with nearly uniform precision over the whole range.