I have a discrete state-space model as the desired reference model:
$$x_m(k+1) = A_m x_m(k) + B_m r(k)$$ $$y_m(k) = C_mx_m(k) + D_m r(k)$$
Then I have a discrete state-space model as the real process model:
$$x(k+1) = Ax(k) + B u(k)$$ $$y(k) = Cx(k) + D u(k)$$
And the adaptation law is: $$u(k) = (\theta^0 - \theta(k))Mr(k) - Lx(k)$$ $$L = (B_m^T B)^{-1}B_m^T(A - A_m)$$ $$M = (B_m^T B)^{-1}B_m^T B_m$$
where $\theta^0 = 1$.
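For reference, the gains $L$ and $M$ can be computed numerically once the model matrices are known. Below is a minimal sketch, assuming small illustrative matrices (the values of $A$, $B$, $A_m$, $B_m$ are my own examples, not from the question) and assuming $B_m^T B$ is invertible:

```python
import numpy as np

# Illustrative second-order models (example values, not from the question)
A  = np.array([[1.0, 0.1],
               [0.0, 0.9]])
B  = np.array([[0.0],
               [0.1]])
Am = np.array([[1.0,  0.1],
               [-0.05, 0.8]])
Bm = np.array([[0.0],
               [0.1]])

# L = (Bm^T B)^{-1} Bm^T (A - Am),  M = (Bm^T B)^{-1} Bm^T Bm
# solve() avoids forming the explicit inverse; requires Bm^T B to be invertible
L = np.linalg.solve(Bm.T @ B, Bm.T @ (A - Am))
M = np.linalg.solve(Bm.T @ B, Bm.T @ Bm)

print("L =", L)   # feedback gain
print("M =", M)   # feedforward gain
```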
I have an error: $$e(k) = y_m(k) - y(k)$$
And I want $e(k) = 0$, which depends on $\theta(k)$. To solve this problem, I need to use the discrete Lyapunov function:
$$V(k) = \frac{1}{2} [ e^T(k) P e(k) + (\theta^0 - \theta(k) )^T ( \theta^0 - \theta(k) ) ], k = 0, 1,2,3,..., n$$
We can write $\phi(k) = \theta^0 - \theta(k)$, so that
$$V(k) = \frac{1}{2} [ e^T(k) P e(k) + \phi^T(k) \phi(k) ], k = 0, 1,2,3,..., n$$
where $P$ is the solution of the discrete Lyapunov equation
$$A_mPA_m^T - P + Q = 0$$ $$ Q = Q^T > 0$$
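$P$ can be obtained numerically: `scipy.linalg.solve_discrete_lyapunov` solves exactly the form $A_m P A_m^T - P + Q = 0$. A minimal sketch, reusing the illustrative $A_m$ from the snippet above and taking $Q = I$:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Same illustrative reference matrix as above (example values, not from the question)
Am = np.array([[1.0,  0.1],
               [-0.05, 0.8]])
Q = np.eye(2)  # any Q = Q^T > 0

# solve_discrete_lyapunov solves  Am P Am^T - P + Q = 0
P = solve_discrete_lyapunov(Am, Q)

# Verify the residual and positive definiteness of P
print(np.allclose(Am @ P @ Am.T - P + Q, np.zeros((2, 2))))  # True
print(np.all(np.linalg.eigvalsh(P) > 0))                      # True
```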
Question:
I want to find the difference $dV(k)$ of $V(k)$ (the discrete-time analogue of the derivative) and then find the update of $\phi(k)$ so that $dV(k) < 0$ always holds as long as $e(k) \neq 0$.
Edit:
We use the state error $x_e(k) = x_m(k) - x(k)$ instead.
Then we try to make the Lyapunov candidate function decrease:
$$V(k) = \frac{1}{2} [ x_e^T(k) P x_e(k) + \phi^T(k) \phi(k) ]$$
The difference of $V(k)$ is:
$$dV(k) = \frac{1}{2} [ x_e^T(k) P x_e(k) + \phi^T(k) \phi(k) ] - \frac{1}{2} [ x_e^T(k-1) P x_e(k-1) + \phi^T(k-1) \phi(k-1) ]$$
The factor $\frac{1}{2}$ does not affect the sign of $dV(k)$, so we can drop it. Setting $dV(k) = 0$ then gives $$dV(k) = [ x_e^T(k) P x_e(k) + \phi^T(k) \phi(k) ] - [ x_e^T(k-1) P x_e(k-1) + \phi^T(k-1) \phi(k-1) ] = 0$$
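As a sanity check, $dV(k)$ can be evaluated numerically along a simulated trajectory: run the reference model and the process with the control law above, form $x_e(k)$ and $\phi(k)$, and compare consecutive values of $V$. A minimal sketch, assuming the illustrative matrices from the earlier snippets, a constant reference $r(k) = 1$, and a fixed placeholder $\theta(k)$ just to exercise the bookkeeping (the actual update law for $\theta(k)$ is what this question asks for):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative matrices (same example values as above, not from the question)
A  = np.array([[1.0, 0.1], [0.0, 0.9]])
B  = np.array([[0.0], [0.1]])
Am = np.array([[1.0, 0.1], [-0.05, 0.8]])
Bm = np.array([[0.0], [0.1]])

L = np.linalg.solve(Bm.T @ B, Bm.T @ (A - Am))
M = np.linalg.solve(Bm.T @ B, Bm.T @ Bm)
P = solve_discrete_lyapunov(Am, np.eye(2))

theta0 = 1.0
theta  = 0.5   # placeholder: fixed theta, since the update law is the open question
r      = 1.0   # constant reference

def V(xe, phi):
    """Lyapunov candidate  V = 1/2 (xe^T P xe + phi^T phi)."""
    return 0.5 * (xe.T @ P @ xe + phi * phi).item()

x, xm = np.zeros((2, 1)), np.zeros((2, 1))
V_prev = None
for k in range(10):
    phi = theta0 - theta
    u   = phi * (M * r) - L @ x       # u(k) = (theta0 - theta(k)) M r(k) - L x(k)
    x   = A @ x + B @ u               # process
    xm  = Am @ xm + Bm * r            # reference model
    xe  = xm - x                      # state error
    V_k = V(xe, phi)
    if V_prev is not None:
        # difference between consecutive values of V
        print(f"k={k}: dV = {V_k - V_prev:+.6f}")
    V_prev = V_k
```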