I am trying to solve an optimal control problem where we are given a control system
$$ \dfrac{d}{dt}\begin{bmatrix}x_1 \\ x_2\end{bmatrix} =\begin{bmatrix} 2 & 1\\ 1 & 3\end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 3 & 3\\ 5 & 5\end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} $$
with the initial condition $\mathbf{x}(0)=[3,1]^T$, and it asks me to find a controller $\mathbf{u}=(u_1,u_2):[0,+\infty)\to\mathbb{R}^2$ that minimizes the following integral cost
$$\int_0^\infty \left( \mathbf x^T \mathbf{I} \,\mathbf{x} + \mathbf{u}^T \mathbf{R} \, \mathbf{u} \right) \mathrm d t $$
where
$$ \mathbf{R} = \begin{bmatrix} 1 & 1\\ 1 & 1\end{bmatrix} $$
I have done a couple of control system problems, but never one where I have to minimize a function, let alone an integral one, so I am really lost on how to start. Does anyone know a good place to start, or examples that solve similar problems?
This problem belongs to the field of optimal control. The first thing to notice is that the control input enters symmetrically both in the system equation and in the cost function: the two columns of the input matrix are identical, and since $\mathbf{R}$ is the all-ones matrix we have $\mathbf{u}^T \mathbf{R}\,\mathbf{u}=(u_1+u_2)^2$. This means that only the sum $u_1+u_2$ matters, not the individual values. Hence we can simplify the problem by introducing $u=u_1+u_2$:
$$\dot{x}=\begin{bmatrix}2&1\\1&3 \end{bmatrix}x+\begin{bmatrix}3\\5 \end{bmatrix}u \text{ , with } x(t=0)=[3,1]^T \text{ and } \lim_{t\to \infty}x(t)=0.$$
Our goal is to minimize
$$J= \frac{1}{2}\int_0^\infty \left( x^T(2I)\,x + u\cdot 2\cdot u \right) \mathrm{d}t,$$
which is exactly the original cost $\int_0^\infty \left(x^Tx + u^2\right)\mathrm{d}t$ written with the conventional factor $\frac{1}{2}$, so here $Q=2I$ and $R=2$.
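If you want to convince yourself of the reduction numerically, here is a quick sketch with NumPy (the matrices are taken from the problem statement):

```python
import numpy as np

B_full = np.array([[3.0, 3.0],
                   [5.0, 5.0]])   # original 2x2 input matrix
R_full = np.ones((2, 2))          # original control weight R
b = np.array([[3.0], [5.0]])      # reduced input vector

rng = np.random.default_rng(0)
for _ in range(5):
    u = rng.normal(size=(2, 1))   # random (u1, u2)
    s = u.sum()                   # u = u1 + u2
    # B*u depends only on the sum u1 + u2 ...
    assert np.allclose(B_full @ u, b * s)
    # ... and so does the control cost: u^T R u = (u1 + u2)^2
    assert np.allclose(u.T @ R_full @ u, s**2)
print("reduction verified")
```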
In general, if we have the system
$$\dot{x}=Ax+Bu\text{ , with } x(t=0)=x_0,\text{ and } \lim_{t\to \infty}x(t)=0.$$
And if we want to minimize the cost
$$J = \dfrac{1}{2}\int_{0}^{\infty} \left( x^TQx + u^TRu \right) \mathrm{d}t$$
We will have to solve the algebraic Riccati equation (ARE) for the symmetric matrix $P$:
$$A^TP + PA - PBR^{-1}B^TP + Q = 0.$$
Then solve for the optimal state
$$\dot{x}^*(t)=[A-BR^{-1}B^TP]\,x^*(t), \text{ with } x^*(0)=x_0.$$
The minimizing control law is given by
$$u^*(t)=-R^{-1}B^TP\,x^*(t).$$
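Putting the pieces together for this problem, a sketch that forms the gain $K=R^{-1}B^TP$, checks that the closed loop $A-BK$ is stable, and simulates $x^*(t)$ from the given initial condition (using SciPy's `solve_ivp`):

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[3.0], [5.0]])
Q = 2.0 * np.eye(2)
R = np.array([[2.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P    # optimal feedback gain, u = -K x
Acl = A - B @ K                   # closed-loop dynamics A - B R^{-1} B^T P

# All closed-loop eigenvalues must lie in the open left half-plane
assert np.all(np.linalg.eigvals(Acl).real < 0)

# Simulate x* from the given initial condition x(0) = [3, 1]^T
sol = solve_ivp(lambda t, x: Acl @ x, (0.0, 10.0), [3.0, 1.0])
x_final = sol.y[:, -1]
print(x_final)  # close to the origin, as required by x(t) -> 0
```

Note that this unconstrained solution is what the ARE machinery gives you; whether it also satisfies the sign constraint on the control is a separate check, as discussed below.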
If the obtained solution does not satisfy $u\geq 0$ for all $t$, then you will need to use Pontryagin's maximum principle, which is a little bit more difficult than the unconstrained problem.
Reference: the best reference I know for optimal control is the book *Optimal Control Systems* by Naidu. In my opinion, it is one of the best books on control systems in general. Highly recommend getting a copy :D.