What is the simplest control to stabilize this system (a textbook example)?


I want to show a textbook example of a system whose linearization is not controllable but which can still be stabilized by feedback. I am thinking of this one: $$ \begin{aligned} \dot{x}_1 &= x_2^3, \\ \dot{x}_2 &= u, \end{aligned} $$ where $x_1$, $x_2$, and $u$ are scalars, and the goal is to drive the system to the origin.
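As a quick sanity check of the uncontrollability claim, here is a small numpy sketch (the Jacobian linearization is taken at the origin, where the cubic term has zero derivative):

```python
import numpy as np

# Linearization of (x2^3, u) at the origin:
# d(x2^3)/dx1 = 0 and d(x2^3)/dx2 = 3*x2^2 = 0 at x2 = 0.
A = np.array([[0.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Controllability matrix [B, A@B] for a 2-state system.
C = np.hstack([B, A @ B])
print(np.linalg.matrix_rank(C))  # 1 < 2, so the linearization is not controllable
```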

What is the simplest control that you would propose for this system?

Accepted answer:

Even though the linearization is not controllable, the nonlinear system can still be stabilized with linear feedback. I propose the control law

$$ u = -x_1 - x_2 \tag{1} $$

which leads to the closed-loop dynamics

$$ \begin{align} \dot{x}_1 &= x_2^3 \\ \dot{x}_2 &= -x_1 - x_2 \end{align} \tag{2} $$

Take the Lyapunov function

$$ V(x) = x_1^2 + 2 x_1 x_2 + x_2^2 + \frac{1}{2} x_2^4 $$

Completing the square gives $V(x) = (x_1 + x_2)^2 + \frac{1}{2} x_2^4$, which vanishes only at $(0, 0)$, so $V$ is positive definite (and radially unbounded). Its derivative along trajectories of $(2)$ is

$$ \dot{V}(x) = -2 (x_1 + x_2)^2 $$

which is negative semi-definite: it vanishes exactly on the line $x_2 = -x_1$. On that line, $(2)$ gives $\dot{x}_2 = -x_1 - x_2 = 0$ but $\dot{x}_1 = x_2^3 = -x_1^3 \neq 0$ whenever $x_1 \neq 0$, so any trajectory starting there immediately leaves the line. Hence no solution can stay in the set $\{\dot{V}(x) = 0\}$ except the origin $x_1 = x_2 = 0$.
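The Lyapunov computation above can be double-checked symbolically. A small sympy sketch, differentiating $V$ along the closed-loop vector field $(2)$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Closed-loop vector field (2) and the candidate Lyapunov function.
f1 = x2**3
f2 = -x1 - x2
V = x1**2 + 2*x1*x2 + x2**2 + sp.Rational(1, 2)*x2**4

# dV/dt along trajectories: (dV/dx1)*x1dot + (dV/dx2)*x2dot.
Vdot = sp.diff(V, x1)*f1 + sp.diff(V, x2)*f2

# Should be identically -2*(x1 + x2)**2.
print(sp.simplify(Vdot + 2*(x1 + x2)**2))  # 0
```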

So, by LaSalle's invariance principle (applicable globally since $V$ is radially unbounded), the origin is globally asymptotically stabilized by the linear feedback $(1)$.

This is probably also the "simplest" stabilizing control law (linear feedback with both gains equal to $1$), though that depends on your definition of simple.