Is there any material anywhere on convergence of PID controllers? Ie, if we formalize the "plant process" in some way, like $y_{t+1} = f(x_t,y_t)$ (in other words, the process value at a given time depends on the previous process value and controlled value), are there conditions on $f$ that determine if there exist PID coefficients that converge, or on what the coefficients need to be?
I was experimenting with simulated PID controllers and found that it seemed very difficult to get a controller to converge for $f(x_t, y_t) = x_t^2$, and I was curious if the lack of dependence on the previous value ($y_t$) made the convergence difficult.
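For reference, here is a minimal sketch of the kind of simulation I was running (plant $y_{t+1} = x_t^2$; the gains, setpoint, and horizon are arbitrary illustration values):

```python
def simulate_pid(kp, ki, kd, setpoint=4.0, steps=50):
    """Discrete PID driving the plant y_{t+1} = x_t**2 toward `setpoint`."""
    y = 0.0                     # process value
    integral = 0.0
    prev_error = setpoint - y
    history = []
    for _ in range(steps):
        error = setpoint - y
        integral += error
        derivative = error - prev_error
        # controlled value from the PID law
        x = kp * error + ki * integral + kd * derivative
        y = x ** 2              # plant: output depends only on the current input
        prev_error = error
        history.append(y)
    return history

traj = simulate_pid(kp=0.5, ki=0.0, kd=0.0)
print(traj[:6])
```

With these particular gains the output just bounces between $0$ and the setpoint (the proportional action overshoots, resets, and overshoots again), which is what prompted the question.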
In the linear case a PID closed loop is asymptotically stable (i.e. converges for $t \rightarrow \infty$) iff the real parts of all eigenvalues of the closed-loop dynamic matrix are negative.
Explicitly, in the single-input single-output case you have to compute the eigenvalues of $$ \left[ \begin{array}{ccc} A-(k_P+\frac{k_D}{T_f})\mathbf{b}\mathbf{c}^T & -k_I \mathbf{b} & -\frac{k_D}{T_f^2}\mathbf{b} \\ \mathbf{c}^T & 0 & 0 \\ \mathbf{c}^T & 0 & -\frac{1}{T_f} \end{array} \right], $$ where $A$ is your (autonomous) system matrix, $\mathbf{b}$ is the input vector, $\mathbf{c}^T$ the output vector and $k_P, k_I, k_D, T_f$ the respective controller parameters.
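As a sketch of that check (the plant matrices and the gains below are made-up example values, not taken from your system):

```python
import numpy as np

# Assumed example plant: an arbitrary stable 2-state SISO system.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0], [0.0]])
kP, kI, kD, Tf = 2.0, 1.0, 0.5, 0.1   # arbitrary controller parameters

# Assemble the closed-loop matrix block by block, per the formula above.
top = np.hstack([A - (kP + kD / Tf) * (b @ c.T), -kI * b, -(kD / Tf**2) * b])
mid = np.hstack([c.T, [[0.0]], [[0.0]]])
bot = np.hstack([c.T, [[0.0]], [[-1.0 / Tf]]])
M = np.vstack([top, mid, bot])

eigs = np.linalg.eigvals(M)
stable = np.all(eigs.real < 0)        # asymptotic stability criterion
print("closed-loop eigenvalues:", eigs)
print("asymptotically stable:", stable)
```

For these particular example values the criterion comes out stable; in practice you would sweep $k_P, k_I, k_D$ and keep the gain sets for which all real parts stay negative.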
In the nonlinear case that you're dealing with, you can either linearize in the vicinity of a stationary point and apply the condition given above, or look for a Lyapunov function showing that the energy of the system converges. These concepts usually require the autonomous system itself to be asymptotically stable, though, which supports your suspicion that $f$ being independent of preceding states makes a stability analysis difficult.
In case you want to give it a try, here is the state space representation of the PID controller: $$ \Sigma_{PID} \left\{ \begin{align} & \dot{\mathbf{x}} = \left[ \begin{array}{cc} 0 & 0 \\ 0 & -\frac{1}{T_f} \end{array} \right] \mathbf{x} + \left[ \begin{array}{c} 1 \\ 1 \end{array} \right]e \\ & u = \left[ k_I, -\frac{k_D}{T_f^2} \right] \mathbf{x} + \left( k_P + \frac{k_D}{T_f} \right)e \end{align} \right., \qquad \mathbf{x}(0) = \mathbf{x}_0, $$ where $e$ is the control error, $u$ is the control input and $\mathbf{x}$ are the states of the controller subsystem.
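To sanity-check this realization, here is a forward-Euler simulation of $\Sigma_{PID}$ for a constant unit error $e(t) = 1$, compared against the closed-form response one obtains from the transfer function $k_P + k_I/s + k_D s/(T_f s + 1)$ (all parameter values are arbitrary):

```python
import math

kP, kI, kD, Tf = 2.0, 1.0, 0.5, 0.1   # arbitrary controller parameters
dt, T = 1e-4, 1.0                     # Euler step and simulation horizon
x1 = x2 = 0.0                         # controller states
e = 1.0                               # constant unit control error
t = 0.0
while t < T:
    x1 += dt * e                      # integrator state:        x1' = e
    x2 += dt * (-x2 / Tf + e)         # filter state:            x2' = -x2/Tf + e
    t += dt
u = kI * x1 - (kD / Tf**2) * x2 + (kP + kD / Tf) * e

# Closed form for a unit-step error: u(t) = kP + kI*t + (kD/Tf)*exp(-t/Tf),
# i.e. the derivative action decays away, as a filtered derivative of a step should.
u_exact = kP + kI * T + (kD / Tf) * math.exp(-T / Tf)
print(u, u_exact)
```

The two values agree to within the Euler discretization error, which confirms that the state-space model realizes the usual PID law with a first-order derivative filter.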
Note that one obvious trait of your system is that you will not be able to stabilize it for states $y_0 > y_c$, where $y_c$ is the setpoint you want to hold it at.
Good reads are:
Hassan K. Khalil - Nonlinear Systems
Alberto Isidori - Nonlinear Control Systems
Steven H. Strogatz - Nonlinear Dynamics and Chaos