I'm going to implement a quadratic optimizer in C for embedded systems, because I need speed. But I'm having trouble finding a quadratic optimizer for C that works on embedded systems. I'm using a 32-bit CPU.
This is not really a question about embedded systems or C. The question is whether it's possible to simulate, for example, every input between 0 and 255 (8-bit resolution) for 50 time steps each, which will be 256 * 50 = 12800 simulation steps.
My idea is to simulate the model with a constant input signal, then compare the summed output with a reference value:
$$J(u_k) = \left| R - \sum y \right|$$
where $u_k$ is the current candidate input. The constant-input simulation is then repeated for each value $u \in \{0, 1, 2, 3, 4, 5, \dots, u_k\}$, and we get another $J$ each time.
Then we find which $J$ value is the smallest and take its index. The resulting value will be between 0 and 255.
The procedure can be viewed like this:
- Simulate with $u = 0$ for 50 steps with forward Euler
- Sum the output vector $y$
- Compare the summed output with the reference $R$ by taking the absolute value of the difference
- Repeat with $u = 1$ for 50 steps with forward Euler
- ...and so on, up to $u = 255$
Will that work? Or is there a better way, like computing the unconstrained model predictive controller and saturating the input at e.g. 255? I only need constraints on the input, not on the states.
First, using a control horizon of only 1 makes for a pretty poor MPC implementation.
Besides that, finding the optimal input by brute-force enumeration on a discretized grid is both unnecessarily expensive and only approximate. The solution to a scalar QP can be computed analytically: compute the unconstrained optimal solution (which is a linear map from the current state), and if it violates the constraints, the optimal solution is to saturate it at the violated bound.
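A minimal sketch of that analytic solution, assuming the scalar QP has the standard form $\min_u \tfrac{1}{2} h u^2 + f u$ with $h > 0$ and box bounds on $u$ (here $h$ and $f$ would come from your model and current state, e.g. via the unconstrained MPC gain; they are inputs to the function, not derived in the question):

```c
/* Analytic solution of a scalar box-constrained QP:
 *   minimize  0.5*h*u*u + f*u   subject to  lo <= u <= hi,  h > 0.
 * The unconstrained minimizer is u* = -f/h; since the objective is
 * convex in u, clamping u* to the bounds gives the constrained optimum. */
static float scalar_qp(float h, float f, float lo, float hi)
{
    float u = -f / h;          /* unconstrained optimum   */
    if (u < lo) u = lo;        /* saturate at lower bound */
    if (u > hi) u = hi;        /* saturate at upper bound */
    return u;
}
```

For example, `scalar_qp(2.0f, -100.0f, 0.0f, 255.0f)` returns 50 (the unconstrained optimum is inside the box), while `scalar_qp(1.0f, -300.0f, 0.0f, 255.0f)` saturates at 255. One division and two comparisons per control step: far cheaper than 12800 Euler iterations.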