My question is whether an autonomous system with an exponentially stable steady-state will continue to have one under small non-vanishing perturbations.
Consider the system
$\frac{d}{dt}x=f(x)+\epsilon g(x)$,
with $x\in\mathbb{R}^n$, and $f,g$ continuously differentiable vector fields.
Assume that for $\epsilon=0$ the system has a (not necessarily globally) exponentially stable steady-state at $x_0^{SS}$. Intuitively, I would expect that for $|\epsilon|\ll 1$ the system also has an exponentially stable steady-state $x_\epsilon^{SS}$. My questions: (i) Does this hold? (ii) If it holds, what is the name of the theorem saying so, or where do I find it?
Note: I am talking about non-vanishing perturbations, i.e. $g(x_0^{SS})\neq 0$, which in general implies $x_\epsilon^{SS}\neq x_0^{SS}$, if $x_\epsilon^{SS}$ exists at all (which I don't know).
What I tried: (1) I read the corresponding chapters in Khalil's "Nonlinear Systems". There, it always seems to be assumed that $g$ depends on time $t$. Consequently, I only found theorems saying that one can bound the difference $\|x_\epsilon(t)-x_0(t)\|<b$ if $|\epsilon|\ll 1$, but not that the system still has a steady-state (it could, e.g., oscillate). My question seems to be easier, maybe too easy. (2) I found the concept of structural stability, and I understand it to mean that the system is structurally stable if $\epsilon=0$ is not a bifurcation point. Structural stability seems to imply that the system with $|\epsilon|\ll 1$ will still have a stable steady-state, which is what I want. But I couldn't connect exponential stability with structural stability. Again, everything I found concentrated only on cases that are (for me) too general.
It really helps that $f$ and $g$ are continuously differentiable: this allows you to use a (multidimensional) version of Taylor's theorem -- in other words, to use Taylor series.
We know that $f(x_0) = 0$, and we suspect that the solution to $f(x) + \varepsilon g(x) = 0$ must be nearby. Let's try just that, and write $x_\varepsilon = x_0 + \varepsilon x_1$: we get $f(x_\varepsilon) + \varepsilon g(x_\varepsilon) = f(x_0 + \varepsilon x_1) + \varepsilon g(x_0 + \varepsilon x_1)$. As $f$ and $g$ are continuously differentiable, we can expand them around the point $x_0$ using Taylor series, yielding $f(x_\varepsilon) + \varepsilon g(x_\varepsilon) = f(x_0) + \varepsilon f'(x_0) x_1 + \varepsilon g(x_0) + \ldots$, where $f'(x_0)$ denotes the Jacobian of $f$ at $x_0$ and we've omitted the higher-order terms. Since $f(x_0) = 0$, we're left with $f(x_\varepsilon) + \varepsilon g(x_\varepsilon) = \varepsilon\left( f'(x_0) x_1 + g(x_0)\right) + \ldots$ . If we demand that $x_\varepsilon$ is a root of $f(x)+\varepsilon g(x)$, it must hold that $f'(x_0) x_1 + g(x_0) = 0$. Provided the Jacobian $f'(x_0)$ is invertible, this allows us to solve $x_1 = -[f'(x_0)]^{-1} g(x_0)$. Therefore, to first order, we can express $x_\varepsilon$ explicitly in terms of the given functions and the unperturbed root as $x_\varepsilon = x_0 -\varepsilon\,[f'(x_0)]^{-1} g(x_0)$.
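To make this concrete, here is a small numerical sketch with a made-up 2-D system (the matrix $A$, the quadratic term, and $g=(1,1)^T$ are my own choices, not from the question): we compare the first-order prediction $x_\varepsilon \approx x_0 - \varepsilon\,[f'(x_0)]^{-1} g(x_0)$ against a root found numerically.

```python
import numpy as np
from scipy.optimize import fsolve

# Made-up example: f(x) = A x + (x[0]^2, 0), g(x) = (1, 1)
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])

def f(x):
    return A @ x + np.array([x[0]**2, 0.0])

def g(x):
    return np.array([1.0, 1.0])

eps = 1e-3
x0 = np.zeros(2)                   # unperturbed equilibrium: f(x0) = 0
Df = A                             # Jacobian of f at x0 (the x[0]^2 term drops out)
x1 = -np.linalg.solve(Df, g(x0))   # first-order correction x1 = -[f'(x0)]^{-1} g(x0)
x_pred = x0 + eps * x1

# Solve f(x) + eps*g(x) = 0 numerically, starting from the prediction
x_exact = fsolve(lambda x: f(x) + eps * g(x), x_pred)

err = np.linalg.norm(x_exact - x_pred)
print(err)  # the leftover error is O(eps^2), far smaller than eps itself
```

Here the prediction misses the true root only at order $\varepsilon^2$, as expected from the truncated expansion.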
This is an elementary example of (regular) perturbation theory: if you want to know more, see e.g. MH Holmes, Introduction to Perturbation Methods, chapter 1. In the above, I've skipped over many, many details, but I hope you've been given a taste of how to approach such problems.
To answer your question about a theorem underpinning this: the above is really a manifestation of the implicit function theorem. Write $F(x,\varepsilon) = f(x) + \varepsilon g(x)$: we have $x_0$ such that $F(x_0,0) = 0$ and would like to establish the existence of $x_\varepsilon$ such that $F(x_\varepsilon,\varepsilon)=0$ for $\varepsilon$ close to 0. The most important condition for this to be true is already clear from the above analysis: the whole thing falls apart when the Jacobian $f'(x_0)$ is singular. Take for example the scalar case $f(x) = x^2$, $x_0 = 0$ and $g(x) = 1$. The function $F(x,\varepsilon) = x^2 + \varepsilon$ doesn't have real roots for any positive value of $\varepsilon$. For more on this, see e.g. chapter 6 in Holmes.
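You can see this degenerate case numerically in one line (a sketch assuming NumPy; the value of $\varepsilon$ is arbitrary):

```python
import numpy as np

# Degenerate case: f(x) = x^2, g(x) = 1, so f'(x0) = 0 at x0 = 0
# and the implicit function theorem gives no guarantee.
eps = 1e-3
roots = np.roots([1.0, 0.0, eps])  # roots of x^2 + eps
print(roots)  # a complex-conjugate pair: no real equilibrium survives
```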
Concerning the stability of $x_\varepsilon$, the reasoning is analogous: as long as $\varepsilon$ is small enough, the eigenvalues of the Jacobian at the new stationary state are perturbed, but their real parts remain negative, since the eigenvalues depend continuously on the matrix entries. If one or more eigenvalues have zero real part, though, you have to be more careful and actually calculate the influence of the perturbation on those critical eigenvalues, using the method outlined above.
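As a sanity check, here is a self-contained numerical sketch with a made-up 2-D system (the matrix, nonlinearity, and $g$ are my own assumptions): find the perturbed equilibrium and verify that the eigenvalues of the Jacobian there keep strictly negative real parts.

```python
import numpy as np
from scipy.optimize import fsolve

# Made-up example: f(x) = A x + (x[0]^2, 0), g(x) = (1, 1)
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])

def F(x, eps):
    return A @ x + np.array([x[0]**2, 0.0]) + eps * np.array([1.0, 1.0])

def jac(x):
    # Jacobian of f + eps*g at x (g is constant, so only f contributes)
    J = A.copy()
    J[0, 0] += 2.0 * x[0]
    return J

eps = 1e-3
x_eps = fsolve(lambda x: F(x, eps), np.zeros(2))  # perturbed equilibrium
eigs = np.linalg.eigvals(jac(x_eps))
print(eigs.real)  # both real parts strictly negative: still exponentially stable
```

For small $\varepsilon$ the eigenvalues are only slightly shifted from those of $A$ (here $-1$ and $-2$), so exponential stability persists.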