I am currently doing an academic study on the effect of various tuning parameters on MPC controller performance. I have a model of a chemical reaction system (the van de Vusse reaction) in a CSTR, and the plant is assumed to operate at steady state initially. The MPC operates on a linearized model of the plant obtained by Jacobian linearization. A setpoint change is introduced at t = 1 s. The sampling time and prediction horizon of the controller are fixed at 1 s and 15 s respectively. The control horizon is varied among 1 s, 3 s and 15 s, and the Simulink results are shown below.
It is seen that when the control horizon is too short (1 s), the MPC controller does not have enough degrees of freedom to perform the optimization well, resulting in a poor response. However, when the control horizon is long (15 s), the controller performance also deteriorates compared to a control horizon of just 3 s. I have tested other values of the control horizon as well, and 2 s turns out to be the optimum that yields the fastest response; the responses become progressively slower as the control horizon increases past 2 s.
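To make the question easier to reproduce without my Simulink model, here is a minimal toy version of the same experiment: an unconstrained linear MPC on a simple stable SISO plant (not the van de Vusse model; the plant matrices, the input penalty `lam`, and the move-holding parametrization are all my own assumptions), with the prediction horizon fixed at 15 steps and the control horizon m varied.

```python
import numpy as np

def mpc_step(A, B, C, x, r, N, m, lam=0.01):
    """One receding-horizon step: optimize m input moves over an N-step
    prediction; the input is held at the last move beyond step m."""
    # Free response: F stacks C*A^k for k = 1..N
    F = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    # Forced response: Phi maps the m moves to the N predicted outputs
    Phi = np.zeros((N, m))
    for k in range(1, N + 1):
        for j in range(k):
            col = min(j, m - 1)  # moves beyond the control horizon are held
            Phi[k - 1, col] += (C @ np.linalg.matrix_power(A, k - 1 - j) @ B).item()
    # Least-squares tracking with a small input penalty (no constraints)
    H = Phi.T @ Phi + lam * np.eye(m)
    g = Phi.T @ (r - F @ x)
    u = np.linalg.solve(H, g)
    return float(u[0])  # receding horizon: apply only the first move

def simulate(m, T=30, N=15, r=1.0):
    # Toy discrete-time stable plant (an assumed stand-in, not van de Vusse)
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.0], [0.5]])
    C = np.array([[1.0, 0.0]])
    x = np.zeros((2, 1))
    ys = []
    for _ in range(T):
        u = mpc_step(A, B, C, x, np.full((N, 1), r), N, m)
        x = A @ x + B * u
        ys.append((C @ x).item())
    return ys

for m in (1, 3, 15):
    y = simulate(m)
    print(f"m={m:2d}  y after 5 steps={y[4]:.3f}  final y={y[-1]:.3f}")
```

On this toy plant the closed loop tracks the setpoint for all three control horizons, and only the transient shape changes with m; whether a longer m speeds up or slows down the response here depends on the weight `lam`, which is part of why I suspect my Simulink weights matter for the behaviour I'm seeing.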
I have come across multiple sources saying that increasing the control horizon should improve MPC performance, which makes sense to me. I have also seen people set the control horizon equal to the prediction horizon with no performance problems. So I am confused about why, in my simulation, the response slows down when the control horizon becomes too long.
