For many systems, it seems to be common practice to stack controllers on top of each other. For example, in a quadcopter, one first builds an attitude controller, then builds a velocity controller whose (virtual) control variable is an input to the attitude controller, then builds a position controller whose (virtual) control variable is an input to the velocity controller. I believe (correct me if I'm wrong) that this is known as cascade control.
I'm wondering if there is any theory formalizing why this approach works. To be more precise, suppose I have the following two systems:
- $x' = f(x,u)$ (e.g. position dynamics with virtual control input $u$)
- $y' = g(y,v)$ (e.g. velocity dynamics with actual control input $v$)
Furthermore, suppose I have a feedback law $a(x)$ that stabilizes the first system and a feedback law $b(y)$ that stabilizes the second. In other words, the following two systems are asymptotically stable:
- $x' = f(x, a(x))$
- $y' = g(y, b(y))$
Now we can form the cascade of these two systems, which I believe is formalized as:
- $x' = f(x, y)$
- $y' = g(y, b(y - a(x)))$
It seems like cascade controller designers rely on the fact that this cascade will be asymptotically stable because the two subsystems are separately asymptotically stable. However, something tells me that this is not in general true. I feel like I'm missing something. Are there conditions under which this cascade is asymptotically stable (e.g. $f$ and $g$ are linear)? If not, how does one design an asymptotically stable cascade controller?
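For what it's worth, in at least one linear case the cascade above does work out. Here is a minimal simulation sketch for a double integrator (position $p$, velocity $y$, acceleration input $v$); the variable names, gains, and Euler discretization are all my own assumptions, not from any particular design:

```python
# Cascade sketch for a double integrator: position p, velocity y,
# acceleration input v (names and gains are illustrative assumptions).
# Outer loop: velocity command a(p) = -kp * p stabilizes p' = u under u = a(p).
# Inner loop: b(e) = -kv * e stabilizes y' = v under v = b(y).
# Cascade, as formalized above: p' = y,  y' = b(y - a(p)).

kp, kv = 1.0, 4.0            # illustrative gains
dt, steps = 0.001, 20000     # crude forward-Euler integration out to t = 20

p, y = 1.0, 0.0              # start with a position error and zero velocity
for _ in range(steps):
    v = -kv * (y - (-kp * p))   # inner loop tracks the outer velocity command
    p += dt * y
    y += dt * v

print(p, y)                  # both states decay toward zero
```

In this particular example the closed loop is the linear system $p' = y,\ y' = -k_v y - k_v k_p p$, whose characteristic polynomial $s^2 + 4s + 4$ has both roots at $-2$, so this cascade is asymptotically stable. That argument is special to the linear case, though, and does not answer the general question.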
I did come upon a technique called backstepping, which builds a feedback law for a system out of a feedback law and a Lyapunov function for a subsystem. For example, if you have a feedback law $a(x)$ that stabilizes $x' = f(x) + g(x)\,a(x)$ and you have a Lyapunov function guaranteeing this stability, then you can build a feedback law $b(x,y)$ (depending on both states) stabilizing
- $x' = f(x) + g(x)\,y$
- $y' = h(y) + j(y)\,b(x,y)$
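To make the recipe concrete, here is a sketch of backstepping on the simplest chain $x_1' = x_2,\ x_2' = u$ (my own toy example, not from the question): with virtual law $a(x_1) = -x_1$, Lyapunov function $V_1 = x_1^2/2$, and error $z = x_2 - a(x_1)$, the augmented function $V = V_1 + z^2/2$ leads to the law $u = -x_1 - x_2 - cz$, which gives $V' = -x_1^2 - cz^2$:

```python
# Backstepping sketch on x1' = x2, x2' = u (toy example, names assumed).
# Virtual law a(x1) = -x1 stabilizes x1' = a(x1) with V1 = x1^2 / 2.
# With z = x2 - a(x1) = x2 + x1 and V = V1 + z^2 / 2, choosing
#   u = -x1 - x2 - c * z
# yields V' = -x1^2 - c * z^2 <= 0, so the full state converges.

c = 1.0
dt, steps = 0.001, 20000
x1, x2 = 1.0, -0.5
for _ in range(steps):
    z = x2 + x1              # error between x2 and the virtual control -x1
    u = -x1 - x2 - c * z     # backstepping control law
    x1 += dt * x2
    x2 += dt * u

print(x1, x2)
```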
Unfortunately, I don't think that this approach quite solves my problem. Backstepping would be a way of producing a controller that stabilizes position via acceleration, given a controller that stabilizes position via velocity. However, I want to produce a controller that stabilizes position via acceleration, given a controller that stabilizes position via velocity and a fixed controller that stabilizes velocity via acceleration. This would allow me to modularly build an attitude controller, then a velocity controller on top of that, then a position controller on top of that, etc.
I'm going to attempt to answer my own question based on some research and one of the comments from Pait. There doesn't seem to be a completely general approach to analyzing stability of a cascade based on the stability of its subsystems, but there do appear to be a few somewhat general approaches:
Input-to-state stability (ISS). In this approach, one proves input-to-state stability of a cascade $x_1' = f(x_1, x_2),\ x_2' = g(x_2, u)$ by proving ISS of $x_1' = f(x_1, u)$ (treating the interconnection signal as the input) and ISS of $x_2' = g(x_2, u)$. I believe there is also a standard result saying that if the $x_1$-subsystem is ISS with respect to $x_2$, then global asymptotic stability of the $x_2$-subsystem implies global asymptotic stability of the whole cascade.
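As a toy illustration of that GAS-plus-ISS composition (a linear example of my own): $x_2' = -x_2$ is globally asymptotically stable, $x_1' = -x_1 + x_2$ is ISS with respect to its input $x_2$, and the cascade converges:

```python
# Minimal ISS-cascade illustration (linear toy example, my assumption):
# the x2 subsystem x2' = -x2 is GAS, and x1' = -x1 + x2 is ISS with
# respect to its input x2, so the cascade is GAS.

dt, steps = 0.001, 15000    # forward Euler out to t = 15
x1, x2 = 2.0, -3.0
for _ in range(steps):
    dx1 = -x1 + x2          # driven by the decaying x2 signal
    dx2 = -x2               # GAS subsystem, independent of x1
    x1 += dt * dx1
    x2 += dt * dx2

print(x1, x2)
```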
ISS with small gains. I don't fully understand this technique, but the rough idea seems to be that a feedback interconnection of two ISS systems remains stable as long as the composition of their ISS gains is "small enough" (a contraction), which is known as the small-gain condition.
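Here is a toy linear sketch of what the small-gain condition buys (my own example, with names and gains assumed): each subsystem has input gain $g = 0.5$, and since $g \cdot g < 1$ the feedback interconnection stays stable:

```python
# Small-gain sketch (linear toy, my assumption): each subsystem is ISS
# with input gain g = 0.5, and since g * g < 1 the feedback
# interconnection is stable. With a gain product >= 1 this argument
# no longer applies.

g = 0.5                     # coupling gain of each loop; product g * g < 1
dt, steps = 0.001, 20000    # forward Euler out to t = 20
x1, x2 = 1.0, -2.0
for _ in range(steps):
    dx1 = -x1 + g * x2      # subsystem 1 driven by subsystem 2
    dx2 = -x2 + g * x1      # subsystem 2 driven by subsystem 1
    x1 += dt * dx1
    x2 += dt * dx2

print(x1, x2)
```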
Singular perturbation/time-scale separation. I also don't fully understand this technique, but the rough idea seems to be that if the lower-level system ($x_2' = g(x_2, u)$) has significantly faster dynamics than the higher-level system ($x_1' = f(x_1, x_2)$), then you can analyze the stability of each loop separately and the cascade will be stable.
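A toy sketch of the time-scale argument (my own example): the fast inner loop $\varepsilon y' = -(y + x)$ drives $y$ to the command $-x$ much more quickly than the outer state $x' = y$ moves, so for small $\varepsilon$ the slow dynamics are approximately $x' = -x$:

```python
# Time-scale separation sketch (toy model, my assumption): the inner
# loop eps * y' = -(y + x) tracks the command -x much faster than the
# outer state x' = y evolves, so for small eps the slow dynamics are
# approximately x' = -x.

eps = 0.01                 # time-scale ratio; the argument needs eps small
dt, steps = 0.001, 10000   # forward Euler out to t = 10 (dt resolves 1/eps)
x, y = 1.0, 0.0
for _ in range(steps):
    dx = y                 # slow outer dynamics
    dy = -(y + x) / eps    # fast inner dynamics tracking the command -x
    x += dt * dx
    y += dt * dy

print(x, y)
```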