Consider the LTI system \begin{equation}\label{e1} \dot{\mathbf{x}}(t) =A \mathbf{x}(t)+B \mathbf{u}(t) \end{equation} Assume that the system is controllable. It is well known that, if we want to steer the system from $\mathbf{x}(0)=0 $ to a target state $\mathbf{x}(t_f)=\mathbf{x}_f$, the control $\mathbf{u}(t)$ that accomplishes this while minimizing the energy functional $$E=\int_{0}^{t_{f}} \|\mathbf{u}(t) \|_2^2 \: \mathrm{d} t $$ is given by \begin{equation}\mathbf{u}^{*}(t)=B^{T} \mathrm{e}^{A^{T}\left(t_{f}-t\right)} W ^{-1}\mathbf{x}_{\mathrm{f}}\end{equation} where $$ W=\displaystyle\int_{0}^{t_{\mathrm{f}}} \mathrm{e}^{A\tau} B B^{T} \mathrm{e}^{A^{T}\tau} \: \mathrm{d} \tau $$ is the controllability Gramian matrix.
Now, I would like to write the optimal control $\mathbf{u}^{*}(t)$ in a feedback form, i.e. something like: $$ \mathbf{u}^{*}(t) = -K(t)\mathbf{x}(t)$$
Can anyone show me how to do it? What is the gain $K(t)$ in this case? Can I write it in terms of the Gramian matrix?
To do this, you can use a slightly more general expression: the minimum-energy input that drives the state from $x(t_i) = x_i$ to $x(t_f) = x_f$, which is given by
\begin{align} W(t) &= \int_0^t e^{A\,\tau} B\,B^\top e^{A^\top \tau} d\tau, \tag{1} \\ u(t) &= B^\top e^{A^\top (t_f - t)}\,W(t_f-t_i)^{-1}\left(x_f - e^{A\,(t_f-t_i)} x_i\right). \tag{2} \end{align}
Note that your equation is recovered by setting $t_i = 0$ and $x_i = 0$ in $(2)$.
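As a sanity check, $(1)$–$(2)$ can be evaluated numerically. Below is a sketch assuming NumPy/SciPy; the pair $A$, $B$ and the boundary states are arbitrary illustrative choices (any controllable pair works):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

# Illustrative controllable system (not from the question).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
t_i, t_f = 0.0, 2.0
x_i = np.array([1.0, 0.0])
x_f = np.array([0.0, 1.0])

# Gramian W(t_f - t_i) from (1), computed by numerical quadrature.
W, _ = quad_vec(lambda tau: expm(A * tau) @ B @ B.T @ expm(A.T * tau),
                0.0, t_f - t_i)

# Constant vector multiplying the matrix exponential in (2).
eta = np.linalg.solve(W, x_f - expm(A * (t_f - t_i)) @ x_i)

def u(t):
    # Open-loop minimum-energy input from (2).
    return B.T @ expm(A.T * (t_f - t)) @ eta

# Simulate x' = A x + B u(t) and check that x(t_f) hits x_f.
sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)).ravel(),
                (t_i, t_f), x_i, rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], x_f, atol=1e-6))  # → True
```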
To obtain a feedback policy, replace $t_i$ and $x_i$ in $(2)$ with $t$ and $x(t)$ respectively, yielding
$$ u(t) = B^\top e^{A^\top (t_f - t)}\,W(t_f-t)^{-1}\left(x_f - e^{A\,(t_f-t)} x(t)\right). \tag{3} $$
Expression $(3)$ amounts to re-solving $(2)$ at every instant, taking $t_i$ equal to the current time $t$ and $x_i$ equal to the current state $x(t)$, and applying only the input value at that current time.
Note that $W(0) = 0$, so $W(t_f-t)^{-1}$ in $(3)$ blows up as $t \to t_f$; the feedback gain therefore becomes unbounded near the final time.
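The feedback law $(3)$ can likewise be simulated, stopping the integration just short of $t_f$ to sidestep the singular Gramian. Again a sketch assuming NumPy/SciPy, with the same kind of arbitrary illustrative system as above:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

# Illustrative controllable system (not from the question).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
t_f = 2.0
x0 = np.array([1.0, 0.0])
x_f = np.array([0.0, 1.0])

def W(t):
    # Gramian (1) over [0, t], computed by numerical quadrature.
    val, _ = quad_vec(lambda tau: expm(A * tau) @ B @ B.T @ expm(A.T * tau),
                      0.0, t)
    return val

def u_fb(t, x):
    # Feedback law (3); W(t_f - t) becomes singular as t -> t_f.
    rhs = x_f - expm(A * (t_f - t)) @ x
    return B.T @ expm(A.T * (t_f - t)) @ np.linalg.solve(W(t_f - t), rhs)

# Integrate up to just before t_f, where W(t_f - t)^{-1} blows up.
t_end = t_f - 1e-3
sol = solve_ivp(lambda t, x: A @ x + (B @ u_fb(t, x)).ravel(),
                (0.0, t_end), x0, rtol=1e-8)

# The state should be very close to x_f just before the final time.
print(np.linalg.norm(sol.y[:, -1] - x_f))
```

Stopping at $t_f - \varepsilon$ for a small $\varepsilon$ is a pragmatic workaround: along the optimal trajectory the input stays bounded, so the terminal error shrinks like $\varepsilon$ even though the gain itself diverges.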