For a linear discrete-time state-space model:
$$x(k+1) = Ax(k) + Bu(k)$$ $$y = Cx(k) + Du(k)$$
I can choose the best inputs $U = \begin{bmatrix} u(k) & u(k+1) & u(k+2) & \dots & u(k+N_c-1) \end{bmatrix}^T$
By minimizing a cost function
$$J = X^TQX + U^TRU$$
Where:
$$X = F x(k) + \Phi U = \begin{bmatrix} x(k+1) & x(k+2) & x(k+3) & \dots & x(k+N_p) \end{bmatrix}^T$$
Where:
$$F = \begin{bmatrix} A\\ A^2\\ A^3\\ \vdots \\ A^{N_p} \end{bmatrix} , \Phi = \begin{bmatrix} B &0 &0 &\cdots & 0\\ AB & B & 0 & \cdots & 0\\ A^2B& AB & B &\cdots &0 \\ \vdots & \vdots & \vdots & \ddots &\vdots \\ A^{N_p-1}B & A^{N_p-2}B & A^{N_p-3}B & \cdots & A^{N_p-N_c}B \end{bmatrix}$$
The variables $N_p$ and $N_c$ are the prediction horizon and the control horizon. The matrices $Q > 0$ and $R > 0$ are tuning matrices.
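As a sketch of how these prediction matrices can be built (the function name and the double-integrator example system are my own, for illustration only):

```python
import numpy as np

def prediction_matrices(A, B, Np, Nc):
    """Build F and Phi so that X = F x(k) + Phi U over the horizons Np, Nc."""
    n, m = B.shape
    # F stacks A, A^2, ..., A^Np
    F = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, Np + 1)])
    # Phi is block lower-triangular: block row i predicts x(k+i+1),
    # block column j multiplies u(k+j)
    Phi = np.zeros((Np * n, Nc * m))
    for i in range(Np):
        for j in range(min(i + 1, Nc)):
            Phi[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return F, Phi

# Tiny double-integrator example (assumed, not from the question)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
F, Phi = prediction_matrices(A, B, Np=4, Nc=2)
```

The first block column of $\Phi$ reads $B, AB, A^2B, \dots$ from top to bottom, matching the structure above.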
My goal is to minimize this cost function $$J = X^TQX + U^TRU$$
Where I say that the state vector $x(k)$ cannot go below or above certain values, i.e. constraints. In practice, determining the maximum and minimum position for the system is not difficult, but bounding the velocity is far harder.
And here is where stability margins come in. If the state trajectory moves from one position to another with a certain velocity, the system can become unstable. Just because it's a predictive controller doesn't mean the system is 100% robust against overshoot.
To obtain stability margins, i.e. robustness, I would use the discrete Lyapunov equation:
$$A^TPA - P + Q = 0$$
which we solve for $P$. Here $Q = Q^T > 0$, e.g. $Q = I$ (not the same $Q$ as in the cost function), and $A$ is the system matrix from the discrete state-space model.
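A minimal sketch of solving this with SciPy (the stable example matrix is my own; note that `solve_discrete_lyapunov(a, q)` solves $aPa^T - P + q = 0$, so we pass $A^T$ to get the form above):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])   # stable example system (assumed)
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a P a^T - P + q = 0,
# so passing A.T yields A^T P A - P + Q = 0
P = solve_discrete_lyapunov(A.T, Q)
residual = A.T @ P @ A - P + Q
```

Since $A$ is Schur stable and $Q > 0$, the solution $P$ is symmetric positive definite.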
Then we pick the Lyapunov function candidate
$$V(x(k)) = \frac{1}{2}x(k)^TPx(k)$$
Then we take the "derivative" of $V(x(k))$. In the discrete case this is simply the difference between the current value of the Lyapunov function and the previous one:
$$\dot{V}(x(k)) = V(x(k)) - V(x(k-1))$$
If $\dot{V}(x(k))$ is positive ($V$ increasing), that is bad; if it is negative ($V$ decreasing), that is good.
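A small sketch of monitoring this difference along a trajectory (the example system is my own; for a stable $A$ with $P$ from the Lyapunov equation, every difference should come out negative):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lyapunov_differences(A, P, x0, steps=20):
    """Simulate x(k+1) = A x(k) and return V(x(k)) - V(x(k-1))
    along the trajectory, with V(x) = 0.5 x^T P x."""
    V = lambda x: 0.5 * x @ P @ x
    diffs, x = [], x0
    for _ in range(steps):
        x_next = A @ x
        diffs.append(V(x_next) - V(x))
        x = x_next
    return diffs

# Stable example system (assumed); P solves A^T P A - P + I = 0
A = np.array([[0.5, 0.1], [0.0, 0.8]])
P = solve_discrete_lyapunov(A.T, np.eye(2))
diffs = lyapunov_differences(A, P, x0=np.array([1.0, -1.0]))
```

This is exactly why $P$ is chosen from the Lyapunov equation: along $x(k+1) = Ax(k)$ the difference equals $\tfrac{1}{2}x^T(A^TPA - P)x = -\tfrac{1}{2}x^TQx < 0$ for $x \neq 0$.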
Question:
As I said...the goal is to minimize the cost function
$$J = X^TQX + U^TRU$$
To obtain the best input signals for the system. Setting limits (constraints) for position is easy, but limits for velocity can be difficult. So my question is:
Can I use the difference of the Lyapunov function candidate
$$\dot{V}(x(k)) = V(x(k)) - V(x(k-1))$$
to compute the constraints for the velocity before my quadratic solver minimizes the cost function?
If I see that $\dot{V}(x(k))$ is increasing, I will tighten the velocity limits (constraints) for the quadratic solver. If I see that $\dot{V}(x(k))$ is decreasing, I will relax the velocity limits (constraints).
A standard QP cost function looks like this:
$$J = \frac{1}{2}x^TQx + c^Tx$$
subject to:
$$Ax \leq b$$
Where $b$ contains the constraints. Assume that the constraint is instead written for the velocity as:
$$Ax \leq b - \dot{V}(x(k))$$
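A toy sketch of the heuristic being asked about (everything here, including the shrink factor and the 2-state example, is my own illustration, not a validated scheme): tighten the velocity bound whenever the Lyapunov difference is positive, then solve the QP.

```python
import numpy as np
from scipy.optimize import minimize

def qp_with_adaptive_velocity_bound(Q, c, v_max, dV, shrink=0.5):
    """Minimize 0.5 x^T Q x + c^T x with x = [position, velocity],
    tightening the velocity bound whenever the Lyapunov difference
    dV is positive (the heuristic from the question)."""
    bound = shrink * v_max if dV > 0 else v_max
    cost = lambda x: 0.5 * x @ Q @ x + c @ x
    jac = lambda x: Q @ x + c
    res = minimize(cost, np.zeros(2), jac=jac, method="SLSQP",
                   bounds=[(None, None), (-bound, bound)])
    return res.x, bound

Q = np.eye(2)
c = np.array([0.0, -2.0])   # unconstrained minimizer is (0, 2)
x_tight, b_tight = qp_with_adaptive_velocity_bound(Q, c, v_max=1.5, dV=+0.1)
x_loose, b_loose = qp_with_adaptive_velocity_bound(Q, c, v_max=1.5, dV=-0.1)
```

With $\dot{V} > 0$ the bound shrinks to $0.75$ and the optimal velocity saturates there; with $\dot{V} < 0$ it saturates at the original $1.5$.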
As Johan Löfberg mentioned, you can define soft constraints.
Conventional constraints are hard constraints, which means they must not be violated under any circumstances:
$$x_{\min}\le x_i \le x_{\max}$$
A soft constraint, however, is a constraint that may be violated, but only at a heavy penalty:
$$x_{\min}-s_v\le x_i \le x_{\max}+s_v$$
where $s_v \ge 0$ is called a slack variable. Sometimes this variable is denoted $\epsilon$ as well.
Since we want $s_v = 0$, we penalize it heavily in the cost function:
$$J_{\text{new}}=J_{\text{previous}}+\rho ||s_v||^n$$
where $\rho$ is a heavy penalty. Here $n$ can be $1$ or $2$.
Hence, in your case, keep the constraints on $V$ hard and allow the velocity to exceed its bounds only at a heavy penalty.
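A minimal sketch of such a softened velocity bound with $n = 2$ (the scalar example and the function name are my own): the decision variables are the velocity $v$ and the slack $s$, with $v \le v_{\max} + s$, $v \ge v_{\min} - s$, $s \ge 0$, and penalty $\rho s^2$ added to the cost.

```python
import numpy as np
from scipy.optimize import minimize

def soft_constrained_velocity(q, c, v_min, v_max, rho=1e3):
    """Minimize 0.5*q*v^2 + c*v + rho*s^2 over z = (v, s) with a
    softened box v_min - s <= v <= v_max + s and s >= 0."""
    cost = lambda z: 0.5 * q * z[0]**2 + c * z[0] + rho * z[1]**2
    cons = [
        {"type": "ineq", "fun": lambda z: v_max + z[1] - z[0]},   # v <= v_max + s
        {"type": "ineq", "fun": lambda z: z[0] - (v_min - z[1])}, # v >= v_min - s
        {"type": "ineq", "fun": lambda z: z[1]},                  # s >= 0
    ]
    res = minimize(cost, np.zeros(2), constraints=cons, method="SLSQP")
    return res.x  # (v, s)

# Unconstrained minimizer is v = 3, beyond v_max = 2: the soft bound
# is violated, but only by a tiny slack because rho is large.
v, s = soft_constrained_velocity(q=1.0, c=-3.0, v_min=-2.0, v_max=2.0)
```

With a large $\rho$ the solver returns $v$ just above $v_{\max}$ with a tiny slack; as $\rho \to \infty$ the soft constraint approaches the hard one.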