Extremum Seeking Control vs Lyapunov Stability


Consider a dynamical system $$\dot{x}=f(x)+g(x)u$$ and suppose the goal is to stabilize the state $x$ to the origin using a control input $u$.

The Lyapunov approach is to construct a Control Lyapunov Function (CLF). A globally valid CLF may not exist, or may be hard to find, in which case we settle for a local CLF and try to enlarge its region of attraction (ROA).

Extremum seeking control does not require any knowledge of $f$ or $g$; it provides a hill-climbing method to optimize a measured output $y=h(x)$, which we set to $h(x)=-|x|^2$ for the sake of simplicity, so that maximizing $y$ drives $x$ to the origin. The paper "Extremum Seeking Control: Convergence Analysis" by Nešić (2009) seems to indicate that there is a trade-off between the convergence rate and the ROA: faster convergence comes with a smaller ROA, and vice versa.
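To make the hill-climbing idea concrete, here is a minimal sketch of classical perturbation-based extremum seeking on a static map (not the scheme from the cited paper; the gains `a`, `omega`, `k` and the function names are illustrative choices): the controller probes with a sinusoidal dither, demodulates the measured output to estimate the gradient of $h$, and integrates that estimate.

```python
import math

def extremum_seeking(h, theta0, a=0.1, omega=10.0, k=2.0, dt=1e-3, T=50.0):
    """Perturbation-based extremum seeking on a static map y = h(x).

    Probes at x = theta + a*sin(omega*t), then demodulates the
    measurement with sin(omega*t): on average, sin(omega*t) * y is
    proportional to h'(theta), so integrating it climbs the hill.
    """
    theta = theta0
    for i in range(int(T / dt)):
        s = math.sin(omega * i * dt)
        y = h(theta + a * s)       # only the measured objective is used;
        theta += dt * k * s * y    # no model of the dynamics is needed
    return theta

# Maximize h(x) = -|x|^2: the hill-climber should approach x = 0.
x_star = extremum_seeking(lambda x: -x * x, theta0=2.0)
```

Note the trade-off visible even in this toy loop: the averaged dynamics behave like $\dot\theta \approx -k a\,\theta$ for $h(x)=-x^2$, so larger gains speed up convergence but also enlarge the residual oscillation and shrink the region where the averaging approximation is valid.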

Does this mean that if we allowed the convergence rate to be arbitrarily slow, we could achieve stability from any initial condition? And would that make extremum seeking stronger than the Lyapunov approach, even though it requires no knowledge of the dynamics?