I came across an exercise about the linearization of this non-linear equation around the operating point $x^{\circ}=0, y^{\circ}=0$:
$$y=a\ddot{x}+b\sin x$$
The process started by:
$$x=x^{\circ}+\Delta x,\; \dot{x}=\dot{x}^{\circ}+\Delta \dot{x},\; \ddot{x}=\ddot{x}^{\circ}+\Delta \ddot{x}$$
and also
$$y=y^{\circ}+\Delta y$$
Then those relations were substituted in the initial equation and by using Taylor's series the linearized equation was produced.
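(For reference, carrying the substitution through with $\sin(x^{\circ}+\Delta x)\approx\sin x^{\circ}+\cos(x^{\circ})\,\Delta x$ and subtracting the operating-point relation $y^{\circ}=a\ddot{x}^{\circ}+b\sin x^{\circ}$ gives, if I follow the steps correctly,
$$\Delta y = a\,\Delta\ddot{x}+b\cos(x^{\circ})\,\Delta x,$$
which at $x^{\circ}=0$ reduces to $\Delta y = a\,\Delta\ddot{x}+b\,\Delta x$.)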
My question is: how did the following relations come about?
$$\dot{x}=\dot{x}^{\circ}+\Delta \dot{x},\; \ddot{x}=\ddot{x}^{\circ}+\Delta \ddot{x}$$
They can't come from differentiating
$$x=x^{\circ}+\Delta x,$$
since $x^{\circ}$ is a number: it would vanish under differentiation and could never produce $\dot{x}^{\circ}$ and $\ddot{x}^{\circ}$.
Also, can $\Delta x$ be differentiated at all? Isn't it supposed to be just a small number?
Those relations are simply a change of variables: you position yourself at the point of interest, and $\Delta x$ (and its derivatives) are simply coordinates measuring distance from your new origin. It's a regular variable; people often don't even use the $\Delta$ prefix and just write $x=x_0+u$ or something like that (think of the Celsius scale, which measures relative to the freezing point instead of absolute zero).
Of course, if we want the linear approximation to be reasonable, $\Delta x$ still has to be small (in the limit $\Delta x \to {\rm d}x$ the approximation becomes exact, because that's exactly what a Taylor series does: the linear term stands next to the first derivative).
Short story even shorter: $\Delta x$ is a regular variable; it can be differentiated and manipulated just like any other variable. It's the Taylor expansion that then assumes it to be small.
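To make this concrete, here is a quick numerical sanity check (a sketch with made-up coefficients $a$, $b$, not from the original exercise) showing that the linearized relation tracks the non-linear one ever more closely as $\Delta x$ shrinks:

```python
import math

# Hypothetical coefficients for y = a*x'' + b*sin(x); the values are made up.
a, b = 2.0, 3.0

def y_exact(x, xddot):
    """The original non-linear relation y = a*x'' + b*sin(x)."""
    return a * xddot + b * math.sin(x)

def y_linear(dx, dxddot):
    """Linearization around x° = 0, ẍ° = 0, y° = 0:
    Δy = a·Δẍ + b·cos(0)·Δx = a·Δẍ + b·Δx."""
    return a * dxddot + b * dx

# The error of the linear model shrinks faster than Δx itself,
# because only higher-order Taylor terms are dropped.
for dx in (0.1, 0.01, 0.001):
    err = abs(y_exact(dx, 0.0) - y_linear(dx, 0.0))
    print(f"Δx = {dx:>6}: error = {err:.2e}")
```

Running it, each tenfold reduction in $\Delta x$ cuts the error by far more than a factor of ten, which is the whole point of keeping $\Delta x$ small.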
EDIT:
Your original question seems to be how you get the derivative versions. You have to realize that your $x$ and $y$ are two dependent variables of some third independent variable $t$, and the dot denotes differentiation with respect to this third parameter. You can interpret $x(t)$ physically as the trajectory of some object in space; then $\dot{x}(t)$ is the velocity and $\ddot{x}(t)$ is the acceleration. Now, let's pick an initial "time" $t=t_0$ and define $$x^\circ = x(t_0),\quad \dot{x}^\circ=\dot{x}(t_0),\quad \ldots$$ You see this is perfectly well defined: you are not differentiating a constant with respect to time; you are taking the derivative and evaluating it at some time.
The displacements $\Delta x, \Delta \dot{x}, \Delta \ddot{x}$ are then simply a matter of specifying how far your values differ from this reference state at $t_0$. The key is the order of operations: first differentiate, then evaluate, not the other way around. It's the same as in the Taylor series: there, you also have $$f(t_0+\Delta t)=f(t_0)+\dot{f}(t_0)\Delta t+\frac{1}{2}\ddot{f}(t_0)\Delta t^2+\cdots$$ and $\dot{f}(t_0)$ doesn't mean the derivative of the constant $f(t_0)$, but the derivative of $f$, evaluated at $t_0$.
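A minimal example of that distinction: take $f(t)=\sin t$ and $t_0=0$. Then
$$\dot{f}(t_0)=\cos t\,\big|_{t=0}=1, \qquad\text{whereas}\qquad \frac{\rm d}{{\rm d}t}\big[f(t_0)\big]=\frac{\rm d}{{\rm d}t}[0]=0.$$
Differentiating first and evaluating second gives a perfectly meaningful number; evaluating first kills all the information.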