Stability of discrete-time dynamical systems using Lyapunov stability where A is function of optimization variable


Hi, I am trying to solve a constrained optimization problem involving a Lyapunov stability condition. In the problem, we aim to find $\beta$ such that

$$\min_\beta ||\beta^TF-y|| \quad \text{s.t.}\quad A^{T}PA-P<0$$ for some symmetric $P>0$. These conditions ensure that $A$ is (Schur) stable, which is required in our application.
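As an aside, for a *fixed* $A$ this Lyapunov condition can be checked numerically with SciPy's discrete Lyapunov solver. A minimal sketch with illustrative values (assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# A hypothetical, fixed Schur-stable A (spectral radius < 1).
A = np.array([[0.5, 0.2],
              [0.1, 0.3]])

# SciPy solves  a X a^T - X + q = 0,  so passing a = A^T yields
# A^T P A - P = -Q, i.e. the Lyapunov inequality with Q = I on the right.
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)

# A is Schur stable iff this P is symmetric positive definite.
spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
min_eig_P = np.min(np.linalg.eigvalsh(P))
```

This only certifies stability for a known $A$; the difficulty in the question is that $A$ depends on $\beta$.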

From my understanding, this constraint is a Linear Matrix Inequality (LMI), which is convex. However, since in my problem $A$ is also a function of the optimization variable $\beta$, the problem is non-convex and cannot be solved directly using semidefinite programming (SDP). In particular, the first row of $A$ consists of the vector $\beta$ and the rows below form a shifted identity, so that $y_{t+1}=Ay_t$ where $y_t=[y(t-1), y(t-2),\dots,y(t-L)]^T$ and $\beta=[\beta_1,\beta_2,\dots,\beta_L]$: $$A = \begin{bmatrix} \beta_1 & \beta_2 & \beta_3 & \cdots & \beta_{L-1} & \beta_{L}\\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix} $$ Are there any ideas on how to solve for $\beta$? Many thanks!
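For concreteness, the companion matrix above can be built and tested for stability directly. A small sketch with illustrative coefficients:

```python
import numpy as np

def companion(beta):
    """Companion matrix from the question: beta in the first row,
    a shifted identity below."""
    L = len(beta)
    A = np.zeros((L, L))
    A[0, :] = beta
    A[1:, :-1] = np.eye(L - 1)
    return A

# Illustrative coefficients (hypothetical values).
beta = np.array([0.5, -0.2, 0.1])
A = companion(beta)

# Stability of A (and hence of y_{t+1} = A y_t) is equivalent to the
# spectral radius being strictly below 1.
rho = np.max(np.abs(np.linalg.eigvals(A)))
```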

Best Answer

This problem is not convex as stated, but you can easily derive an algorithm that finds a locally optimal solution.

First note that the problem can be reformulated as

$$ \begin{array}{ll}\min_{P\succ0,\beta\in\mathbb{R}^n} \gamma \quad \text{s.t.}\quad& (A_0+B\beta^T)^{T}P(A_0+B\beta^T)-P\prec0,\\ & ||F^T\beta-y||\le\gamma. \end{array} $$ where $A_0$ is the shift part of $A$ (zero first row, shifted identity below) and $B=e_1$ is the first standard basis vector, so that $A=A_0+B\beta^T$. In turn, this problem is equivalent to
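Concretely, the decomposition $A=A_0+B\beta^T$ looks as follows (a sketch with illustrative numbers):

```python
import numpy as np

L = 4
beta = np.array([[0.3], [-0.1], [0.05], [0.02]])  # illustrative column vector

# A0: the shift part of A (zero first row, identity on the subdiagonal);
# B = e_1, the first standard basis vector.
A0 = np.zeros((L, L))
A0[1:, :-1] = np.eye(L - 1)
B = np.zeros((L, 1))
B[0, 0] = 1.0

# B @ beta.T is zero except for beta^T in the first row, so this
# reproduces the companion matrix A from the question.
A = A0 + B @ beta.T
```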

$$ \begin{array}{ll}\min_{P\succ0,\beta\in\mathbb{R}^n}\gamma \quad \text{s.t.}\quad& \begin{bmatrix} -P & (A_0+B\beta^T)^{T}P\\ P(A_0+B\beta^T) & -P \end{bmatrix}\prec0,\\ &\begin{bmatrix} -\gamma & (F^T\beta-y)^T\\ F^T\beta-y & -\gamma I \end{bmatrix}\preceq0. \end{array} $$
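For completeness, the equivalence between the two formulations is a standard Schur complement step. Since $P\succ0$, taking the Schur complement with respect to the $(2,2)$ block of the first matrix gives

$$\begin{bmatrix} -P & (A_0+B\beta^T)^{T}P\\ P(A_0+B\beta^T) & -P \end{bmatrix}\prec0 \iff -P-(A_0+B\beta^T)^{T}P(-P)^{-1}P(A_0+B\beta^T)\prec0,$$

and the left-hand side of the latter simplifies to $(A_0+B\beta^T)^{T}P(A_0+B\beta^T)-P$. Similarly, for $\gamma>0$ the second matrix inequality is equivalent to $(F^T\beta-y)^T(F^T\beta-y)\le\gamma^2$, i.e. $||F^T\beta-y||\le\gamma$.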

For a fixed $\beta$, the above problem is an LMI problem in $P,\gamma$. Conversely, for a fixed $P$, the above problem is an LMI problem in $\beta,\gamma$. We will exploit this in our algorithm.

Now, we can easily find $\beta$ such that $A_0+B\beta^T$ is Schur stable because this matrix is in companion form: $\beta$ contains the coefficients of its characteristic polynomial, which can be chosen so that all roots lie inside the unit disc. Alternatively, $\beta$ can be computed by solving an LMI problem.
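A sketch of this initialization (illustrative roots; the sign convention follows the companion form in the question):

```python
import numpy as np

# Illustrative target eigenvalues, all strictly inside the unit disc.
roots = np.array([0.5, -0.3, 0.2])

# np.poly gives the monic coefficients [1, p_1, ..., p_L] of
# prod_i (lambda - r_i); the companion matrix in the question has
# characteristic polynomial lambda^L - beta_1 lambda^{L-1} - ... - beta_L,
# so beta_i = -p_i.
p = np.poly(roots)
beta = -p[1:]

# Rebuild A and verify it has exactly the prescribed (stable) spectrum.
L = len(beta)
A = np.zeros((L, L))
A[0, :] = beta
A[1:, :-1] = np.eye(L - 1)
eigs = np.linalg.eigvals(A)
```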

Now this $\beta$ will serve as an initialization point for our algorithm, which reads as follows:

  1. Given $\beta$, solve the above LMI problem for $P\succ0$ and $\gamma>0$. Store $P$.
  2. Given $P$, solve the above LMI problem for $\beta$ and $\gamma>0$. Store $\beta$.
  3. Evaluate a stopping condition (e.g. $\gamma$ does not change anymore). If it is satisfied, stop the algorithm and return $(\gamma,P,\beta)$; otherwise go to Step 1.

This algorithm is not guaranteed to converge globally, but $\gamma$ is guaranteed to be nonincreasing. So, the sequence of the computed $\gamma$'s will converge.

If needed, several initial conditions can be considered to explore the initialization space and increase the chances of getting a global optimum.