Does this inequality guarantee the global stability in this paper?


I'm reading a very informative paper, but I ran into some formulations that are hard to understand.

In the stability proof section (Sec. V, Theorem 5.2), they define a Lyapunov function as $V(s) = \frac{1}{2}m\|s\|^2$ and obtain the resulting inequality:

$\|s(t)\|\leq \|s(t_0)\|\exp(-\frac{\lambda-L_a\rho}{m}(t-t_0))+\frac{\epsilon_m}{\lambda-L_a\rho}$

I understand that when $\frac{\lambda-L_a\rho}{m}>0$, $\|s(t)\|\rightarrow \frac{\epsilon_m}{\lambda-L_a\rho}$ as $t\to\infty$, so the error is ultimately bounded and in that sense stable.
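For context, my understanding of how this inequality arises (a sketch, assuming the proof bounds $\dot{V}$ in the standard way; I am reconstructing this, it is not quoted from the paper):

```latex
% Suppose the proof establishes, for V(s) = \tfrac{1}{2} m \|s\|^2,
%   \dot{V} \le -(\lambda - L_a\rho)\,\|s\|^2 + \epsilon_m \|s\|.
% Since \dot{V} = m \|s\| \tfrac{d}{dt}\|s\|, dividing through by m\|s\| gives
\[
\frac{d}{dt}\|s\| \le -\frac{\lambda - L_a\rho}{m}\,\|s\| + \frac{\epsilon_m}{m}.
\]
% Applying the comparison lemma to \dot{y} = -a y + b with
% a = \frac{\lambda - L_a\rho}{m} and b = \frac{\epsilon_m}{m}:
\[
\|s(t)\| \le \|s(t_0)\|\,e^{-a(t-t_0)}
           + \frac{b}{a}\bigl(1 - e^{-a(t-t_0)}\bigr)
       \le \|s(t_0)\|\,e^{-a(t-t_0)}
           + \frac{\epsilon_m}{\lambda - L_a\rho},
\]
% which is exactly the stated inequality.
```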

However, the problem seems to be that in Assumption 3 they assume the upper bound $\epsilon_m$ exists only on a compact set $\mathcal{X}$, meaning the result is valid only within that set, not globally.

Moreover, since they only assume $\epsilon$ is bounded within the compact set, and their result directly contains the $\epsilon_m$ term, it also seems necessary to show that the state always stays in the compact set. To me, such a statement seems to be missing.

Please help me understand why this inequality and the assumption guarantee global stability.

*FYI, I asked the authors this question directly two months ago (plus a reminder two weeks ago) and have not received a reply yet.

Best answer:

Yes. Assumption 3 places a bound on the learning error of the map $\hat f_a$, which maps states and control actions to the unknown translational force. It is this translational force they are trying to predict, so that its effects can be effectively cancelled out during position tracking.

The training does not happen while the controller is running, so they are basically saying that they can train the map $\hat f_a$ to lie within some reasonable error bound $\epsilon_m$ of the true disturbance force $f_a$. This bound then appears in the stability proof as a constant, since any error in the predicted translational force acts as an obstruction to asymptotic stability. The more accurately the model is trained, the tighter the convergence of the controller, because the predictable disturbance forces (e.g. ground effect) can be more closely predicted and cancelled.
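As a sanity check on this ultimate-bound behaviour, here is a minimal numerical sketch. The constants ($m$, $k = \lambda - L_a\rho$, $\epsilon_m$, $s_0$) are made up for illustration, not taken from the paper; it integrates the worst-case scalar error dynamics $\dot s = -\frac{k}{m}s + \frac{\epsilon_m}{m}$ and checks the stated inequality along the trajectory:

```python
import math

# Hypothetical constants (not from the paper), chosen so (lambda - L_a*rho)/m > 0.
m = 1.0          # mass
k = 1.5          # lambda - L_a * rho
eps_m = 0.3      # assumed bound on the prediction error epsilon
s0 = 5.0         # initial tracking-error magnitude ||s(t_0)||

dt, T = 1e-3, 10.0
s, t = s0, 0.0
ultimate_bound = eps_m / k  # epsilon_m / (lambda - L_a * rho)

while t < T:
    # Worst-case scalar error dynamics: s_dot = -(k/m) s + eps_m/m
    s += dt * (-(k / m) * s + eps_m / m)
    t += dt
    # The paper's inequality should hold along the whole trajectory.
    bound = s0 * math.exp(-(k / m) * t) + ultimate_bound
    assert s <= bound + 1e-9

# After the exponential transient decays, ||s|| settles near the ultimate bound.
print(abs(s - ultimate_bound) < 1e-2)  # True
```

This illustrates the point above: the constant $\epsilon_m$ prevents convergence to zero, but the error is driven exponentially into a ball of radius $\epsilon_m/(\lambda - L_a\rho)$.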

To be fair, I haven't checked every single detail of the proof, but under the presented assumptions I can imagine a global stability result could exist. After all, the paper learns certain disturbance forces and cancels them out with feedback linearization; so if you can learn them well, then yeah, a global result (for an appropriate sense of "global") is feasible. Of course, if somewhere else they restrict the state to a compact set, that would limit the claim, but I can't find such a restriction.