In control theory, there is a mathematical model of a control loop called the state-space model. It allows the computation of a state vector $x$ at discrete time $k$, and for this computation it requires knowledge of the previous state vector at $k-1$ only.
So I wonder: is it correct to say that the state-space model has the Markov property? The Markov property is also called the memoryless property, but it is defined in the context of probability distributions, so I am not sure whether my statement is valid.
A general state-space model can be very complicated. For example, a dynamical system with hysteresis will not have the Markov property, because its response to the current input depends on the past input history, not just on the present values.
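To see why hysteresis breaks the memoryless property, here is a minimal sketch of a hysteretic relay (Schmitt-trigger style, with hypothetical thresholds chosen for illustration). Unless the relay's internal memory is itself included in the state vector, the same current input value can produce different outputs depending on the input history:

```python
def relay(u_seq, on=0.8, off=0.2):
    """Hysteretic relay: switches on when the input rises above `on`,
    off when it falls below `off`, and otherwise keeps its last output.
    The output for a given input value therefore depends on history."""
    out = 0          # internal memory of the relay
    outputs = []
    for u in u_seq:
        if u >= on:
            out = 1
        elif u <= off:
            out = 0
        # for off < u < on the previous output is retained
        outputs.append(out)
    return outputs

# Same final input (0.5), different output depending on history:
print(relay([0.5]))        # input never rose above `on`  -> [0]
print(relay([0.9, 0.5]))   # input previously exceeded `on` -> [1, 1]
```

Knowing only the current input $u_k = 0.5$ is not enough to predict the output; one would have to augment the state with the relay's internal memory to recover a Markovian description.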
If you can write down your system as
$$x_{k+1} = f(x_k,u_k)$$
then it will have the Markov property, because only the current state and the current action/actuation determine the next state.
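As a concrete sketch, consider the common linear special case $x_{k+1} = A x_k + B u_k$ with illustrative (made-up) matrices $A$ and $B$. The update function receives only the current state and input, so no earlier history is ever consulted:

```python
import numpy as np

# Hypothetical system matrices, chosen only for illustration.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])   # state transition matrix
B = np.array([[0.0],
              [0.1]])        # input matrix

def step(x_k, u_k):
    """One update x_{k+1} = A x_k + B u_k: the next state is a function
    of the current state and current input alone (Markov property)."""
    return A @ x_k + B @ u_k

x = np.array([[0.0], [0.0]])  # initial state x_0
for k in range(5):
    u = np.array([[1.0]])     # constant input, for simplicity
    x = step(x, u)            # no access to x_{k-1}, x_{k-2}, ...
```

The simulation loop keeps only one state vector in memory; that is exactly the "memoryless" structure the answer describes.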