Let's say I have the standard, time-invariant, linear control theory problem below:
$\dot{\mathbf{x}}(t)=A\mathbf{x}(t)+B\mathbf{u}(t)$
$\mathbf{y}(t)=C\mathbf{x}(t)$
Incidentally, this is a digital control problem, so the system is really discrete-time. I've glossed over this detail so far, but I would ultimately like to achieve dead-beat control.
Let's initially propose the vector representations
$ \mathbf{x}= \begin{bmatrix} x \\ \dot{x} \\ y \\ \dot{y} \\ \theta \\ \dot{\theta} \end{bmatrix} \quad \mathbf{u}=\begin{bmatrix} F \\ \ddot{\theta} \end{bmatrix} $
where $x,y$ represent position in the plane and $\theta$ an angular heading.
Given are the nonlinear differential equations (where $b$ is a friction constant):
$\ddot{x}=-b\dot{x}+F\cos{\theta}$
$\ddot{y}=-b\dot{y}+F\sin{\theta}$
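For concreteness, here is a forward-Euler discretization of these dynamics that I use to simulate the system (just a sketch; the step size `dt` and friction constant `b` are placeholder values):

```python
import math

def step(state, F, theta_ddot, b=0.1, dt=0.01):
    """One forward-Euler step of the nonlinear dynamics.
    state = (x, x_dot, y, y_dot, theta, theta_dot); F is 0 or 1."""
    x, x_dot, y, y_dot, theta, theta_dot = state
    x_ddot = -b * x_dot + F * math.cos(theta)  # x'' = -b x' + F cos(theta)
    y_ddot = -b * y_dot + F * math.sin(theta)  # y'' = -b y' + F sin(theta)
    return (x + dt * x_dot,
            x_dot + dt * x_ddot,
            y + dt * y_dot,
            y_dot + dt * y_ddot,
            theta + dt * theta_dot,
            theta_dot + dt * theta_ddot)
```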
Problem 1. $F$ is in my case a binary switch (off or on): it can only be 0 or 1, and I would like my controller to be aware of this. I've tried researching bang-bang controllers, but the examples there usually involve only one variable, whereas I have a mix of a binary switch and a continuous variable $\theta$.
One solution is to simply pretend $F$ is continuous and output 1 if $F>0.5$ and 0 if $F<0.5$, or to use some stochastic mix. However, this feels prone to slow response and extremely local planning. Is there any research or methodology I can read about where this is analyzed?
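For what it's worth, the two hacks I have in mind look like this (a sketch; the 0.5 threshold is the one mentioned above, and the dithering variant is just one possible "stochastic mix"):

```python
import random

def quantize_force(F_cont, threshold=0.5):
    """Deterministic hack: threshold the continuous command."""
    return 1 if F_cont > threshold else 0

def dither_force(F_cont, rng=random.random):
    """Stochastic mix: fire with probability F_cont, so the
    expected thrust per step equals the continuous command."""
    p = max(0.0, min(1.0, F_cont))
    return 1 if rng() < p else 0
```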
Perhaps this can be made into a dynamic programming problem: find the smallest number of (discrete) steps that drives the state $\mathbf{x}$ to the desired values?
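To sketch what I mean, here is a breadth-first search over a quantized state grid for a 1-D simplification of my system, where the binary thruster plus heading is collapsed to an effective input $u\in\{-1,0,+1\}$ (thrust off, or on with the heading pointed either way); all step sizes and the grid resolution are arbitrary placeholders:

```python
from collections import deque

def min_steps(start, goal, dt=0.25, b=0.5, max_steps=200):
    """Minimal number of discrete steps from `start` to `goal`,
    where a state is (position, velocity) and the dynamics are
    x' = x + dt*v, v' = v + dt*(-b*v + u) with u in {-1, 0, +1}.
    States are deduplicated on a coarse grid to keep BFS finite."""
    def quant(s):
        return (round(s[0] / 0.1), round(s[1] / 0.1))

    frontier = deque([(start, 0)])
    seen = {quant(start)}
    while frontier:
        (x, v), n = frontier.popleft()
        if quant((x, v)) == quant(goal):
            return n
        if n >= max_steps:
            continue
        for u in (-1, 0, 1):
            nxt = (x + dt * v, v + dt * (-b * v + u))
            q = quant(nxt)
            if q not in seen:
                seen.add(q)
                frontier.append((nxt, n + 1))
    return None  # goal cell not reachable within max_steps
```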
Problem 2. Is there a way I can linearize the equations above (rough approximations can be OK) into the classic control theory form above? (See Problem 1 for constraints on $F$). Perhaps by introducing some new dummy variables?
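For instance, one candidate substitution (just a sketch of the "dummy variable" idea, and I'm not sure how it interacts with the binary-$F$ constraint):

$u_1 = F\cos{\theta}, \quad u_2 = F\sin{\theta}$

which makes the translational dynamics exactly linear in the new inputs,

$\ddot{x} = -b\dot{x} + u_1, \qquad \ddot{y} = -b\dot{y} + u_2,$

at the price of turning the switch constraint into $u_1^2 + u_2^2 = F^2 \in \{0, 1\}$.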
Thanks!