PID controller convergence


Is there any material anywhere on the convergence of PID controllers? That is, if we formalize the "plant process" in some way, like $y_{t+1} = f(x_t,y_t)$ (in other words, the process value at a given time depends on the previous process value and the controlled value), are there conditions on $f$ that determine whether there exist PID coefficients that converge, or what the coefficients need to be?

I was experimenting with simulated PID controllers and found that it seemed very difficult to get a controller to converge for $f(x_t, y_t) = x_t^2$, and I was curious if the lack of dependence on the previous value ($y_t$) made the convergence difficult.
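For reference, an experiment of the kind described can be sketched as follows. The gains, setpoint, and horizon here are illustrative choices, not tuned values:

```python
# Minimal discrete-time PID simulation for the plant y_{t+1} = x_t^2.
# All numeric values (gains, setpoint, horizon) are illustrative.

def simulate_pid(kp, ki, kd, setpoint, y0=0.0, steps=50):
    y, integral, prev_error = y0, 0.0, None
    history = []
    for _ in range(steps):
        error = setpoint - y
        integral += error
        derivative = 0.0 if prev_error is None else error - prev_error
        prev_error = error
        x = kp * error + ki * integral + kd * derivative  # controller output
        y = x ** 2  # plant: next value ignores y_t entirely
        history.append(y)
    return history

trace = simulate_pid(kp=0.3, ki=0.05, kd=0.1, setpoint=1.0)
```

Because $y_{t+1} = x_t^2 \ge 0$ regardless of the sign of $x_t$, experiments like this tend to oscillate or diverge for many gain choices, which matches the difficulty described above.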



In the linear case a PID closed loop is asymptotically stable (i.e. converges for $t \rightarrow \infty$) iff the real parts of all eigenvalues of the closed-loop dynamic matrix are smaller than 0.

Explicitly speaking, in the single-input single-output case you have to compute the eigenvalues of $$ \left[ \begin{array}{ccc} A-(k_P+\frac{k_D}{T_f})\mathbf{b}\mathbf{c^T} & -k_I \mathbf{b} & -\frac{k_D}{T_f^2}\mathbf{b} \\ \mathbf{c}^T & 0 & 0 \\ \mathbf{c}^T & 0 & -\frac{1}{T_f} \end{array} \right], $$ where $A$ is your (autonomous) system matrix, $\mathbf{b}$ is the input vector, $\mathbf{c}^T$ the output vector and $k_P, k_I, k_D, T_f$ the respective controller parameters.
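As a numeric sketch of this check, the closed-loop matrix above can be assembled block by block and its eigenvalues inspected. The plant matrices and controller parameters below are arbitrary illustrative values, not taken from the question:

```python
import numpy as np

# Toy second-order plant x' = Ax + bu, y = c^T x; values are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0], [0.0]])

kP, kI, kD, Tf = 2.0, 1.0, 0.5, 0.1  # arbitrary controller parameters

# Closed-loop dynamic matrix from the answer, built block by block.
top = np.hstack([A - (kP + kD / Tf) * (b @ c.T), -kI * b, -(kD / Tf**2) * b])
mid = np.hstack([c.T, np.zeros((1, 1)), np.zeros((1, 1))])
bot = np.hstack([c.T, np.zeros((1, 1)), np.array([[-1.0 / Tf]])])
Acl = np.vstack([top, mid, bot])

eigs = np.linalg.eigvals(Acl)
stable = bool(np.all(eigs.real < 0))  # asymptotic stability criterion
```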

In the nonlinear case that you're dealing with, you can either linearize the system in the vicinity of a stationary point and apply the condition given above, or look for a Lyapunov function showing that the energy of the system converges. These concepts usually require the autonomous system to be asymptotically stable, though, which supports your suspicion that $f$ being independent of preceding states makes a stability analysis difficult.
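The linearization route can be illustrated numerically: approximate the partial derivatives of $f$ at a candidate stationary point by finite differences, and note that for $f(x_t, y_t) = x_t^2$ the dependence on $y_t$ vanishes identically (the point name and step size below are illustrative):

```python
def linearize(f, x_star, y_star, eps=1e-6):
    # Central finite-difference partials of y_{t+1} = f(x_t, y_t) at (x*, y*).
    df_dy = (f(x_star, y_star + eps) - f(x_star, y_star - eps)) / (2 * eps)
    df_dx = (f(x_star + eps, y_star) - f(x_star - eps, y_star)) / (2 * eps)
    return df_dy, df_dx

# For f = x^2 at (1, 1): df/dy is 0 (no dependence on y_t), df/dx is 2x* = 2.
a, b = linearize(lambda x, y: x**2, x_star=1.0, y_star=1.0)
```

The zero $\partial f / \partial y$ is exactly the "no dependence on the previous value" that makes the stability analysis awkward here.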

In case you want to give it a try, here is the state space representation of the PID controller: $$ \Sigma_{PID} \left\{ \begin{align} & \dot{\mathbf{x}} = \left[ \begin{array}{cc} 0 & 0 \\ 0 & -\frac{1}{T_f} \end{array} \right] \mathbf{x} + \left[ \begin{array}{c} 1 \\ 1 \end{array} \right]e \\ & u = \left[ k_I, -\frac{k_D}{T_f^2} \right] \mathbf{x} + \left( k_P + \frac{k_D}{T_f} \right)e \end{align} \right., \qquad \mathbf{x}(0) = \mathbf{x}_0, $$ where $e$ is the control error, $u$ is the control input and $\mathbf{x}$ are the states of the controller subsystem.
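A forward-Euler integration of that controller realization, driven by a given error signal, might look like the following sketch (gains, sample time, and the constant error input are arbitrary choices):

```python
import numpy as np

def pid_control(e_signal, kP, kI, kD, Tf, dt=0.01):
    # Forward-Euler integration of the PID state-space realization above.
    x = np.zeros(2)  # controller states: integrator and derivative filter
    Ac = np.array([[0.0, 0.0], [0.0, -1.0 / Tf]])
    Bc = np.array([1.0, 1.0])
    Cc = np.array([kI, -kD / Tf**2])
    Dc = kP + kD / Tf
    u_out = []
    for e in e_signal:
        u_out.append(Cc @ x + Dc * e)   # control input u
        x = x + dt * (Ac @ x + Bc * e)  # Euler state update
    return u_out

u = pid_control([1.0] * 5, kP=2.0, kI=1.0, kD=0.5, Tf=0.1)
```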

Note that one obvious trait of your system is that you will not be able to stabilize it for initial states $y_0 > y_c$, where $y_c$ is the setpoint you want to reach.

Good reads are:
Hassan K. Khalil - Nonlinear Systems
Alberto Isidori - Nonlinear Control Systems
Steven H. Strogatz - Nonlinear Dynamics and Chaos


The usual prerequisite for using PID controllers is that the plant itself must be stable in the first place. The plant in your example is marginally stable, so you might want to consider stabilizing the plant (using pole-placement etc.) beforehand.

Secondly, the coefficients of PID controllers are mostly selected using heuristic methods such as the Ziegler-Nichols method. The book Modern Control Engineering by Ogata has a whole chapter dedicated to introducing and discussing these methods.
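For instance, the classic closed-loop Ziegler-Nichols rules compute PID gains from the ultimate gain $K_u$ and ultimate oscillation period $T_u$ found experimentally (the numeric inputs below are made up for illustration):

```python
def ziegler_nichols_pid(Ku, Tu):
    # Classic closed-loop Ziegler-Nichols tuning rules for a PID controller:
    # Ku is the ultimate gain, Tu the ultimate oscillation period.
    Kp = 0.6 * Ku
    Ti = Tu / 2.0  # integral time
    Td = Tu / 8.0  # derivative time
    return Kp, Kp / Ti, Kp * Td  # (Kp, Ki, Kd)

# Illustrative values: Ku = 10, Tu = 2 gives Kp = 6.0, Ki = 6.0, Kd = 1.5.
Kp, Ki, Kd = ziegler_nichols_pid(Ku=10.0, Tu=2.0)
```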


Note that the methods presented assume a plant that can be adequately modeled by inhomogeneous linear ordinary differential equations of the form \begin{equation} \sum_{i = 0}^{n} a_{i}(t)\frac{\mathop{}\!\mathrm{d}^{i}{y}}{\mathop{}\!\mathrm{d}{t}^{i}} = x(t). \end{equation} Your example suggests that you are dealing with a difference equation instead. So, you might want to consider a two-step approach: design an analog PID controller using the methods presented, and then digitize it.
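The digitization step might be sketched as follows: discretize the integral term of the analog law $u = k_P e + k_I \int e\,dt + k_D \dot e$ with a trapezoidal rule and the derivative term with a backward difference. The gains and sample time below are arbitrary:

```python
def discretize_pid(kP, kI, kD, T):
    # Returns a stateful step function approximating the analog PID law
    # at sample time T: trapezoidal integral, backward-difference derivative.
    state = {"i": 0.0, "e_prev": 0.0}

    def step(e):
        state["i"] += 0.5 * T * (e + state["e_prev"])  # trapezoidal integral
        d = (e - state["e_prev"]) / T                  # backward difference
        state["e_prev"] = e
        return kP * e + kI * state["i"] + kD * d

    return step

pid = discretize_pid(kP=1.0, kI=0.5, kD=0.1, T=0.01)
u0 = pid(1.0)  # first control output for a unit error step
```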

Also, the input in your example is squared, $x_t^2$. When coupled with a PID controller, this could result in a closed-loop system that is highly nonlinear in $y_t$, so your mileage may vary.