This question comes from Problem 8.11 (p. 381) of Shankar Sastry's book *Nonlinear Systems: Analysis, Stability, and Control*, which is a satellite control problem.
Let me rephrase my question as follows. Consider the bilinear control system
$$\dot{g}=g\left(u_1(t)\begin{bmatrix}0&0&0\\0&0&-1\\0&1&0\end{bmatrix}+u_2(t)\begin{bmatrix}0&0&1\\0&0&0\\-1&0&0\end{bmatrix}\right)$$ or, in short notation, $$\dot{g}=g(u_1E_1+u_2E_2)$$ where $g\in SO(3)$ and $u_1,u_2$ are the controls. The objective is to steer the system from a given initial state $g_i$ to a desired final state $g_f$ within some finite time $T$. Now we restrict the controls to be of the form $$u_1(t)=a_0+a_1\sin2\pi t$$ $$u_2(t)=b_0+b_1\cos2\pi t$$ where $a_0,a_1,b_0,b_1$ are constants. The question is: how should the four constants be chosen?
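For concreteness, here is a minimal numerical sketch (my own, not from the exercise) of what the flow looks like: it integrates $\dot g = g(u_1E_1+u_2E_2)$ by freezing the controls on each small step and applying a matrix exponential, and checks that $g$ stays in $SO(3)$. The constants `(0.3, 1.0, -0.2, 0.5)` are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import expm

E1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
E2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

def flow(params, g0, T=1.0, n=2000):
    """Approximate g(T) for given constants (a0, a1, b0, b1)."""
    a0, a1, b0, b1 = params
    dt = T / n
    g = g0.copy()
    for k in range(n):
        t = (k + 0.5) * dt                      # midpoint of the step
        u1 = a0 + a1 * np.sin(2 * np.pi * t)
        u2 = b0 + b1 * np.cos(2 * np.pi * t)
        g = g @ expm(dt * (u1 * E1 + u2 * E2))  # exact step for frozen controls
    return g

g = flow((0.3, 1.0, -0.2, 0.5), np.eye(3))
# Each step is an exact rotation, so g should be orthogonal up to rounding:
print(np.linalg.norm(g.T @ g - np.eye(3)))
```

Since each frozen-control step is the exponential of a skew-symmetric matrix, the approximation stays on $SO(3)$ by construction; only the time discretization of $u_1,u_2$ introduces error.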
A closed-form general solution is out of the question, since $E_1$ and $E_2$ do not commute. In fact their Lie bracket generates a third matrix $E_3$, and together they form a basis for the Lie algebra $\mathfrak{so}(3)$.
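The bracket relation is easy to verify numerically; $[E_1,E_2]=E_1E_2-E_2E_1$ comes out to the generator of rotations about the remaining axis:

```python
import numpy as np

E1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
E2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

E3 = E1 @ E2 - E2 @ E1
print(E3)
# [[ 0. -1.  0.]
#  [ 1.  0.  0.]
#  [ 0.  0.  0.]]
```

So $\{E_1,E_2,E_3\}$ spans $\mathfrak{so}(3)$, which is what makes the system controllable even with only two control channels.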
At the end of Shankar's paper *The structure of optimal controls for a steering problem*, controls of a similar form are given, but the frequency is not prescribed.
The original exercise also points to references 257 and 320, but I found them useful only for other parts of the exercise, not for this one.
My last resort would be to discretize the interval $[0,T]$ and treat the control on each time slot as constant. The solution of this approximation would then be a function of $a_0,a_1,b_0,b_1$, and the rest is essentially an optimization problem on $\mathbb{R}^4$; but I think this is far from ideal.
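For reference, the brute-force approach I describe can be sketched as follows (all names, the test target, and the solver choice are my own assumptions, not from the exercise): approximate the flow with piecewise matrix exponentials, then minimize the Frobenius distance to $g_f$ over $(a_0,a_1,b_0,b_1)$ with a derivative-free method.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

E1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
E2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

def flow(params, g0, T=1.0, n=200):
    """Approximate g(T) with piecewise-constant controls."""
    a0, a1, b0, b1 = params
    dt = T / n
    g = g0.copy()
    for k in range(n):
        t = (k + 0.5) * dt
        u1 = a0 + a1 * np.sin(2 * np.pi * t)
        u2 = b0 + b1 * np.cos(2 * np.pi * t)
        g = g @ expm(dt * (u1 * E1 + u2 * E2))
    return g

g_i = np.eye(3)
g_f = expm(0.7 * E1) @ expm(-0.4 * E2)   # an arbitrary test target in SO(3)

def cost(params):
    return np.linalg.norm(flow(params, g_i) - g_f)

res = minimize(cost, x0=np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 200})
print(res.x, res.fun)
```

This confirms my complaint: it is a black-box search in $\mathbb{R}^4$ with no guarantee of reaching $g_f$ exactly, and it gives no structural insight into why controls of this sinusoidal form should work.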
Is there any analytical way to solve this? Many thanks!