Continuous-time dynamical system that settles down to a KKT system


I have the following smooth, convex optimization problem $\mathcal{P}$: \begin{equation} \min f(x) \\ \text{subject to} ~~~ g(x) \leq 0 \end{equation} A point $x^* \in \mathbb{R}^n$ is an optimal solution of $\mathcal{P}$ if and only if there exists $u^* \in \mathbb{R}^m$ such that $(x^{*^T}, u^{*^T})^T$ satisfies the following KKT system \begin{cases} u^* \geq 0, ~~ g(x^*) \leq 0 , ~~u^{*^T}g(x^*) = 0 \\ \nabla f(x^*) + \nabla g(x^*)^Tu^* = 0\end{cases}
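For concreteness, here is the KKT system on a one-dimensional toy instance of my own (not from any reference), with $f(x) = (x - 2)^2$ and a single constraint $g(x) = x - 1 \leq 0$: \begin{cases} u^* \geq 0, ~~ x^* - 1 \leq 0, ~~ u^*(x^* - 1) = 0 \\ 2(x^* - 2) + u^* = 0\end{cases} Complementarity forces either $u^* = 0$, which gives $x^* = 2$ (infeasible), or $x^* = 1$, which gives $u^* = 2 \geq 0$; hence $(x^*, u^*) = (1, 2)$.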

The idea now is to construct a continuous-time dynamical system that settles down to a KKT pair of the convex problem $\mathcal{P}$ and its dual. Here is the proposed recurrent neural network model, whose equilibria correspond to solutions of the KKT system above: \begin{cases} \frac{dx}{dt} = -(\nabla f(x) + \nabla g(x)^T(u + g(x))^+) \\ \frac{du}{dt} = (u + g(x))^+ - u\end{cases} where $(u + g(x))^+ = ([u_1 + g_1(x)]^+, \dots, [u_m + g_m(x)]^+)$ and $[u_i + g_i(x)]^+ = \max(u_i + g_i(x), 0)$ for $i = 1, \dots, m$.
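As a quick sanity check of these dynamics (a sketch of my own, not from any reference), here is a forward-Euler simulation on a made-up toy problem, $\min (x_1-2)^2 + (x_2-1)^2$ subject to $x_1 + x_2 - 1 \leq 0$, whose KKT pair works out to $x^* = (1, 0)$, $u^* = 2$; the step size and iteration count are arbitrary choices:

```python
# Forward-Euler simulation of the projection network
#   dx/dt = -(grad f(x) + grad g(x)^T (u + g(x))^+)
#   du/dt = (u + g(x))^+ - u
# on the toy problem: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 1 <= 0,
# whose KKT pair is x* = (1, 0), u* = 2.

def simulate(dt=1e-2, steps=20_000):
    x1, x2, u = 0.0, 0.0, 0.0          # arbitrary initial state
    for _ in range(steps):
        g = x1 + x2 - 1.0              # constraint value g(x)
        p = max(u + g, 0.0)            # (u + g(x))^+
        dx1 = -(2.0 * (x1 - 2.0) + p)  # -(df/dx1 + dg/dx1 * p)
        dx2 = -(2.0 * (x2 - 1.0) + p)  # -(df/dx2 + dg/dx2 * p)
        du = p - u                     # (u + g(x))^+ - u
        x1 += dt * dx1
        x2 += dt * dx2
        u += dt * du
    return x1, x2, u

x1, x2, u = simulate()
print(x1, x2, u)   # should approach 1.0, 0.0, 2.0
```

The trajectory first heads toward the unconstrained minimizer $(2, 1)$ while the constraint is inactive, then the $(u + g(x))^+$ term switches on and steers the state to the KKT pair.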

I'm not very familiar with dynamical systems, which is why I don't understand how this model is derived.

Thanks to this thesis, I figured out that for $\mathcal{P}$ we have the following dynamics \begin{cases} \dot{x} = -\frac{\partial L(x, \lambda)}{\partial x} = - \nabla_x L(x, \lambda) \\ \dot{\lambda}_i = \left[\frac{\partial L(x, \lambda)}{\partial \lambda_i}\right]^+_{\lambda_i} = [g_i(x)]^+_{\lambda_i}, ~~~~~~~ i = 1, \dots, m\end{cases} where $L(x, \lambda) = f(x) + g(x)^T\lambda ~$ is the Lagrangian and $$[g_i(x)]^+_{\lambda_i} = \begin{cases} g_i(x) & \text{if}~ \lambda_i > 0 ~\text{or}~ g_i(x) > 0 \\ 0 & \text{otherwise}\end{cases}$$
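To convince myself these projected primal-dual dynamics also reach a KKT pair, I tried the same kind of forward-Euler sketch on a toy problem of my own, $\min (x_1-2)^2 + (x_2-1)^2$ subject to $g(x) = x_1 + x_2 - 1 \leq 0$ (KKT pair $x^* = (1, 0)$, $\lambda^* = 2$); none of this is from the thesis:

```python
# Forward-Euler simulation of the projected gradient flow
#   dx/dt      = -grad_x L(x, lambda) = -(grad f(x) + lambda * grad g(x))
#   dlambda/dt = [g(x)]^+_{lambda}  (g(x) if lambda > 0 or g(x) > 0, else 0)
# on the toy problem: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 1 <= 0,
# whose KKT pair is x* = (1, 0), lambda* = 2.

def simulate_projected(dt=1e-2, steps=20_000):
    x1, x2, lam = 0.0, 0.0, 0.0        # arbitrary initial state
    for _ in range(steps):
        g = x1 + x2 - 1.0              # constraint value g(x)
        dx1 = -(2.0 * (x1 - 2.0) + lam * 1.0)
        dx2 = -(2.0 * (x2 - 1.0) + lam * 1.0)
        # projected multiplier update: frozen at 0 when lambda = 0 and g < 0
        dlam = g if (lam > 0.0 or g > 0.0) else 0.0
        x1 += dt * dx1
        x2 += dt * dx2
        lam += dt * dlam
    return x1, x2, lam

x1, x2, lam = simulate_projected()
print(x1, x2, lam)   # should approach 1.0, 0.0, 2.0
```

Numerically both flows settle to the same KKT pair on this example, which is exactly why I'd like to understand how the two multiplier updates correspond.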

I'd appreciate it if someone could explain how the definitions of $\dot{\lambda}_i$ and $\frac{du}{dt}$ are related to each other.