Is there a special name for the function $y (x)$ given by the equation $$ y-\sin (y)=x? $$
What is the simplest way to compute the value of the function for a given argument?
This is an interesting question for people who haven't taken a numerical analysis course, so I will give a short introduction by explaining one of the most common methods. I will address the "computing the value of the function for a given argument" part.
By "a given argument" I take you to mean an arbitrary value of $x$ for which you want to compute $y$.
Suppose, first of all, that $x = x_0 \in \mathbb R$. Then let's rewrite the given equation:
$$y-\sin(y)=x_0$$
A method for solving this equation for $y$ is the widely known Newton's method (also called the Newton-Raphson method).
The idea of the method is as follows:
One starts with an initial guess which is reasonably close to the true root; then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the $x$-intercept of this tangent line (which is easily done with elementary algebra). This $x$-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.
Suppose $f : [a, b] \to \mathbb R$ is a differentiable function defined on the interval $[a, b]$ with values in the real numbers $\mathbb R$. The formula for converging on the root is easily derived. Suppose we have some current approximation $x_n$; we can then derive a better approximation, $x_{n+1}$, as follows. The equation of the tangent line to the curve $y = f(x)$ at the point $x = x_n$ is:
$$y=f'(x_n)(x-x_n)+f(x_n)$$
where $f'$ denotes the derivative of the function $f$.
The $x$-intercept of this line (the value of $x$ such that $y=0$) is then used as the next approximation to the root, $x_{n+1}$. In other words, setting $y$ to zero and $x$ to $x_{n+1}$ gives:
$$0 = f'(x_n)(x_{n+1}-x_n)+f(x_n)$$
and finally solving for $x_{n+1}$ gives:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
We start the process with some arbitrary initial value $x_0$. (The closer to the zero, the better; in the absence of any intuition about where the zero might lie, a "guess and check" approach can narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.) The method will usually converge, provided this initial guess is close enough to the unknown zero and $f'(x_0) \neq 0$. Furthermore, for a zero of multiplicity $1$, the convergence is at least quadratic in a neighbourhood of the zero, which intuitively means that the number of correct digits roughly doubles with every step.
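As a standalone illustration of the iteration above (the target function here, $f(x) = x^2 - 2$, and the helper names are my own example, not part of the question), a minimal Python sketch might look like this:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Example: the positive root of f(x) = x**2 - 2 is sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Starting from $x_0 = 1$, each step roughly doubles the number of correct digits, as promised by quadratic convergence.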
Having stated all this, let us apply the method to our equation with $x_0 \in \mathbb R$ fixed:
Let $f(y) = y-\sin(y) - x_0$. Then the derivative is $f'(y)=1-\cos(y)$ (note the minus sign, since the derivative of $-\sin(y)$ is $-\cos(y)$).
Substituting into the iteration formula yields:
$$y_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)} \Rightarrow y_{n+1} = y_n - \frac{y_n - \sin(y_n) - x_0}{1-\cos(y_n)}$$
Now, depending on the arbitrary $x_0 \in \mathbb R$ and your starting approximation $y_0$, you can iterate to obtain the value of the function at the given argument. In some cases the method fails to work (there are many possible reasons, such as cycling forever or converging to the wrong point; note also that here $f'(y) = 1 - \cos(y)$ vanishes at multiples of $2\pi$), and there are checks one can carry out beforehand to see whether the method will converge. Since there are many other methods (and variants), one should choose the appropriate one for the problem at hand. I cannot go through every different case here, though, as that is exactly the content of a course in Numerical Analysis I, II, $\dots$
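For our specific equation, a minimal Python sketch of this iteration (the function name, default starting guess, and tolerance are my own choices, not part of the question) might look like this; the starting guess is kept away from multiples of $2\pi$, where $f'(y) = 1 - \cos(y)$ vanishes:

```python
import math

def solve_for_y(x0, y0=math.pi, tol=1e-12, max_iter=100):
    """Newton iteration for f(y) = y - sin(y) - x0, with f'(y) = 1 - cos(y)."""
    y = y0
    for _ in range(max_iter):
        step = (y - math.sin(y) - x0) / (1 - math.cos(y))
        y -= step
        if abs(step) < tol:
            return y
    raise RuntimeError("Newton's method did not converge")

# Solve y - sin(y) = 1; the result satisfies the original equation
y = solve_for_y(1.0)
```

For $x_0 = 1$ this converges in a handful of iterations to $y \approx 1.93$.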
What you are looking at is the inverse of the function $f(x) = x - \sin(x)$. Since this function is strictly increasing we can use a simple binary search to compute its inverse.
It's fairly easy to see that $x - 1 \leq f^{-1}(x) \leq x + 1$, since $|\sin(y)| \leq 1$. So we initialize our binary search range to $[x-1, x+1]$. We then repeatedly evaluate $f$ at the midpoint of the range, check whether the result is bigger or smaller than our original input, and accordingly discard the lower or upper half of the range. We keep doing this until the range is smaller than the precision we want.
In Python:
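One possible implementation of the bisection just described (the helper names and the tolerance are my own):

```python
import math

def f(x):
    return x - math.sin(x)

def f_inverse(x, tol=1e-12):
    """Invert f by bisection on [x - 1, x + 1], valid since |sin| <= 1."""
    lo, hi = x - 1.0, x + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < x:
            lo = mid   # the root lies in the upper half
        else:
            hi = mid   # the root lies in the lower half
    return 0.5 * (lo + hi)

y = f_inverse(1.0)  # y satisfies y - sin(y) = 1 up to the tolerance
```

Each pass halves the interval, so reaching a tolerance of $10^{-12}$ from an interval of width $2$ takes about $41$ iterations, regardless of the input.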