Model Predictive Control always gives a zero output solution. Why? Do I need soft constraints?


I have a discrete state space model:

$$x(k+1) = Ax(k) + Bu(k)$$ $$y(k) = Cx(k)$$

And I'm trying to compute the predicted inputs. The first thing I do is create the extended observability matrix $\Phi$:

$$\Phi = \begin{bmatrix} CA\\ CA^2\\ CA^3\\ \vdots \\ CA^{n-1} \end{bmatrix}$$

Then I create the lower triangular Toeplitz matrix $$\Gamma = \begin{bmatrix} CB & 0 & 0 & 0 &0 \\ CAB & CB & 0 & 0 &0 \\ CA^2B & CAB & CB & 0 & 0\\ \vdots & \vdots &\vdots & \ddots & \vdots \\ CA^{n-2} B& CA^{n-3} B & CA^{n-4} B & \dots & CA^{n-j} B \end{bmatrix}$$

Where $j=n$
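For concreteness, here is a minimal NumPy sketch of how I build these two matrices. The model matrices `A`, `B`, `C` and the horizon `N` are made-up illustrative values, not my actual identified model:

```python
import numpy as np

# Hypothetical 2-state SISO model (illustrative values only)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

N = 5  # prediction horizon

# Extended observability matrix: stack CA, CA^2, ..., CA^N row-blocks
Phi = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, N + 1)])

# Lower triangular Toeplitz matrix: block (i, j) is C A^(i-j) B for j <= i
Gamma = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        Gamma[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
```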

Then I use linear programming:

$$\max: c^TU$$ subject to: $$KU \leq O$$ $$U \geq 0$$

Where $K = \Gamma^T \Gamma$, $c = (\Gamma^T\Gamma)^T(R - \Phi x)$, and $O = \Gamma^T(R - \Phi x)$. Here $R$ is the reference vector, $x$ is the current state vector, and $U$ is the input-signal vector we want to find.
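This LP can be solved with `scipy.optimize.linprog`; since `linprog` minimizes, I negate $c$. The numbers below (a horizon-3 $\Gamma$, $\Phi$, reference and state) are made up purely to show the setup:

```python
import numpy as np
from scipy.optimize import linprog

# Small illustrative problem, horizon N = 3 (made-up numbers)
Gamma = np.array([[1.0,  0.0, 0.0],
                  [0.5,  1.0, 0.0],
                  [0.25, 0.5, 1.0]])
Phi = np.array([[0.9], [0.81], [0.729]])
R = np.ones((3, 1))        # reference vector
x = np.array([[0.2]])      # current state (1-state example)

e = R - Phi @ x            # tracking term R - Phi x
K = Gamma.T @ Gamma
c = K.T @ e                # objective coefficients, as defined above
O = Gamma.T @ e

# linprog minimizes, so pass -c to maximize c^T U; U >= 0 is the default bound
res = linprog(-c.ravel(), A_ub=K, b_ub=O.ravel(), bounds=(0, None))
U = res.x                  # predicted input sequence
```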

Problem:

Assume our state space model is not identified 100% correctly. Then $R - \Phi x$ can become negative because of the term $\Phi x$, which forces $U = 0$: the linear program maximizes $c^TU$, the coefficients in $c$ turn negative, and $U$ is not allowed to be negative.
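The mechanism can be shown in isolation: if every coefficient of $c$ is negative, maximizing $c^TU$ over $U \geq 0$ lands exactly at $U = 0$. The coefficient values below are made up (an upper bound is added only so the LP is bounded):

```python
import numpy as np
from scipy.optimize import linprog

# All objective coefficients negative, e.g. because R - Phi x < 0
c = np.array([-1.8, -1.3, -0.9])

# Maximize c^T U with 0 <= U_i <= 10: the optimum is U = 0
res = linprog(-c, bounds=(0, 10))
```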

The result:

This prediction gives only $U = 0$ when my model is not identified 100% correctly. How can I solve this so that, even with an imperfectly identified model, $R - \Phi x$ can never be negative, i.e. $R - \Phi x \geq 0$?

This comes from a practical experiment.

Edit:

The reason I'm using linear programming (the simplex method) is that it's simple and easy.