I have a hard time seeing where an ODE and its solution, or a system of ODEs and its solution, comes into play in control theory.
As far as I understand, we generally consider some kind of mix of the "phase/state space" idea and the input/output idea when we consider the control problem.
I can't find a worked example involving a block diagram, the state-space model and the ODE, where the states, inputs, outputs etc. are specified in detail. Most examples seem to just say that the equation describes this problem, but not in which way.
I am therefore looking for such an example!
The state-space model takes the following form in my book: \begin{equation} \frac{dx}{dt}=f(x,u), \qquad y=h(x,u) \end{equation}
Control theory concerns itself with dynamic systems. How are dynamic systems classically modelled? By a set of ordinary differential equations. Usually, the simplest type of control system you would encounter in a textbook on the subject matter is a spring-mass-damper system, as @Li Chun Min mentioned in the comments. Such a one-dimensional system, in an ideal world, would be governed by the following differential equation:
$$ M\frac{d^2y(t)}{dt^2} + C\frac{dy(t)}{dt} + Ky(t) = u(t) \quad\Rightarrow\quad M\ddot{y} + C\dot{y} + Ky = u $$
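To make the equation concrete, here is a minimal numerical sketch of this ODE for a step-force input, using forward Euler integration; the parameter values $M=1$, $C=2$, $K=4$ and $u_0=1$ are made up purely for illustration:

```python
# Forward-Euler simulation of  M*y'' + C*y' + K*y = u(t)
# for a step input u(t) = u0.  Parameters are illustrative only;
# any M, C, K > 0 gives a stable system.
M, C, K = 1.0, 2.0, 4.0
u0 = 1.0
dt, T = 1e-3, 20.0

y, v = 0.0, 0.0                        # displacement and velocity, starting at rest
for _ in range(int(T / dt)):
    a = (u0 - C * v - K * y) / M       # acceleration, solved from the ODE
    y += dt * v
    v += dt * a

print(round(y, 3))                     # prints 0.25
```

At steady state the derivatives vanish and the ODE reduces to $Ky = u_0$, so the mass settles at $y = u_0/K = 0.25$, which is exactly where the simulation ends up.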
You can think of it this way: if you apply a force $u(t)$ to the mass, its displacement will be $y(t)$. If you are wondering where this equation came from, it can be derived in classical mechanics by applying Newton's second law. The notation I have used above is slightly different from what you would typically find in physics books: $m\ddot{x} + c\dot{x} + kx = f(t)$. This is because the input and output of a system — here $f(t)$ and $x(t)$, respectively — have their own notation in control theory, $u(t)$ and $y(t)$. Finally, by taking the Laplace transform with zero initial conditions, the transfer function of the system can be derived to be:
$$ G(s) = \frac{1}{Ms^2 + Cs + K} $$
And now the output can be related to the input as follows:
$$ Y(s) = G(s)U(s) $$
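As a quick sanity check (with the same kind of made-up parameters, $M=1$, $C=2$, $K=4$), the transfer function can simply be evaluated numerically; $G(0)$ is the DC gain, so a unit step input settles at $G(0)\cdot 1 = 1/K$:

```python
# G(s) = 1 / (M s^2 + C s + K) evaluated numerically.
# Parameter values are illustrative only.
M, C, K = 1.0, 2.0, 4.0

def G(s):
    # Works for real s and for s = jw (complex) alike.
    return 1.0 / (M * s**2 + C * s + K)

print(G(0))         # DC gain 1/K -> prints 0.25
print(abs(G(2j)))   # magnitude of the frequency response at w = 2 rad/s
```

The DC gain $G(0) = 1/K = 0.25$ matches the steady-state displacement predicted by the ODE, which is the final value theorem at work.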
Now you can control your system, job done. Classical control, however, has limitations. For example, it gets tedious or nigh impossible to model systems with multiple inputs and outputs. Modern control uses state-space representation to get around those limitations. The idea is that the system has a new property called the state, $x(t)$. The current state of the system and the input, $u(t)$, determine the rate of change of the state, $\dot{x}(t)$. Likewise, the output, $y(t)$, depends on the current state and the input. Putting all of this information in equations yields the model in your question (the right-hand forms hold when the system is linear and time-invariant):
$$ \begin{align} \dot{x} &= f(x,u) = Ax + Bu\\ y &= h(x,u) = Cx + Du \end{align} $$
Working with the same system as above (spring-mass-damper), you would have two state variables because the system is of second order: the position $x_1 = y$ and the velocity $x_2 = \dot{y}$. The state-space model would then be:
$$ \begin{align} \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} &= \begin{bmatrix} 0 & 1 \\ -K/M & -C/M \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1/M \end{bmatrix} u \\ y &= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \end{align} $$
And there you have it. The same system modelled both classically and in state-space.
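As a cross-check, the state-space form can be integrated directly. The sketch below uses forward Euler with made-up parameters $M=1$, $C=2$, $K=4$ and a unit step input, and recovers the steady state $y = u_0/K$ that the classical model predicts:

```python
# Forward-Euler simulation of the state-space model
#   x1' = x2
#   x2' = -(K/M) x1 - (C/M) x2 + (1/M) u
#   y   = x1
# Parameter values are illustrative only.
M, C, K = 1.0, 2.0, 4.0
u0 = 1.0
dt, T = 1e-3, 20.0

x1, x2 = 0.0, 0.0                      # x1 = displacement, x2 = velocity
for _ in range(int(T / dt)):
    dx1 = x2
    dx2 = -(K / M) * x1 - (C / M) * x2 + u0 / M
    x1 += dt * dx1
    x2 += dt * dx2

y = x1
print(round(y, 3))                     # prints 0.25, i.e. u0/K
```

That both representations settle at the same value is expected: they describe the same system, just written in different forms.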