What are the necessary and sufficient conditions for the existence of a bang-bang controller in LTI systems? I think that a condition is that all the eigenvalues must have nonpositive real part. Are there more conditions? What about the multiplicity of the eigenvalues?
On the existence of bang-bang controllers for LTI systems
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
Consider the following system
$$ \dot{x}(t) = A\,x(t) + B\,u(t), \tag{1} $$
with $x(t)\in\mathbb{R}^n$, $u(t)\in\mathbb{R}^m$, $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $x(0) = x_0\in\mathbb{R}^n$ and $u(t) \in \cal{U}$, where $\cal{U}$ is some convex set containing the origin (for example $-1 \leq u_i(t) \leq 1\ \forall\ i\in\{1,\dots,m\}$).
Based on what you have written in the comments, I interpret the question as asking which constraints on $A$ and $B$ guarantee that for every possible $x_0\in\mathbb{R}^n$ there exists a $u(t) \in \cal{U}$ that drives $x(t)$ to the origin in some finite amount of time.
It can be noted that bang-bang control is not directly relevant here; null-controllability under input constraints is the more suitable notion. Null-controllability means that for each initial condition $x_0$ there exist some $T\in[0,\infty)$ and some $u(t) \in \cal{U}$ such that $x(T)=0$. As for bang-bang control: if $(1)$ can be driven to the origin using some $u(t) \in \cal{U}$, then among the inputs that accomplish this there is also a bang-bang solution.
The short answer is that $(A,B)$ needs to be controllable and all the eigenvalues of $A$ are required to have a nonpositive real part [1]. Full controllability is needed because otherwise, even without the constraint $u(t) \in \cal{U}$, it would not be possible to drive the state to the origin in some finite amount of time. There are no constraints on the multiplicity of the eigenvalues.
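Both conditions are straightforward to verify numerically. Below is a small sketch using NumPy (the function names and the test matrices are illustrative, not from the reference): a Kalman rank test for controllability combined with a check that all eigenvalues of $A$ have nonpositive real part.

```python
import numpy as np

def is_controllable(A, B, tol=1e-9):
    """Kalman rank test: (A, B) is controllable iff
    [B, AB, ..., A^(n-1)B] has full row rank n."""
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(ctrb, tol=tol) == n

def null_controllable_with_bounded_input(A, B):
    """The two conditions from the answer: (A, B) controllable and
    Re(lambda_i) <= 0 for every eigenvalue of A."""
    return bool(is_controllable(A, B)
                and np.all(np.real(np.linalg.eigvals(A)) <= 1e-12))

# Double integrator: controllable, eigenvalues 0, 0 -> conditions hold
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[0.0], [1.0]])
print(null_controllable_with_bounded_input(A1, B1))  # True

# Unstable scalar system: eigenvalue +1 -> conditions fail
print(null_controllable_with_bounded_input(np.array([[1.0]]),
                                           np.array([[1.0]])))  # False
```

Note that the eigenvalue check uses a small numerical tolerance, since eigenvalues on the imaginary axis (as in the example below) are computed only up to rounding error.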
In order to see why the multiplicity of the eigenvalues is not an issue, I will consider only one real Jordan block with eigenvalues on the imaginary axis:
$$ C = \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix}, \quad A = \begin{bmatrix} C & I \\ & C & \ddots \\ & & \ddots & I \\ & & & C \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}, \tag{2} $$
with $I$ the $2\times2$ identity matrix and $A$ containing $m$ copies of $C$ along its diagonal (so $A\in\mathbb{R}^{2m\times2m}$). The solution for $x(t)$ when using $(2)$ in $(1)$ can be expressed with the following convolution integral
$$ x(t) = e^{A\,t}x(0) + \int_0^t e^{A\,(t-\tau)} B\,u(\tau)\,d\tau, \tag{3} $$
where the matrix exponential of the Jordan block $A$ from $(2)$ can be shown to be
$$ e^{A\,t} = \begin{bmatrix} e^{C\,t} & t\,e^{C\,t} & \cdots & \frac{t^{m-1}}{(m-1)!}e^{C\,t} \\ & e^{C\,t} & \ddots & \vdots \\ & & \ddots & t\,e^{C\,t} \\ & & & e^{C\,t} \end{bmatrix}, \quad e^{C\,t} = \begin{bmatrix} \cos \omega\,t & \sin \omega\,t \\ -\sin \omega\,t & \cos \omega\,t \end{bmatrix}. \tag{4} $$
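The closed form $(4)$ can be checked numerically against SciPy's general matrix exponential; here is a quick sketch for $m=2$, $\omega=1$ (the value of $t$ is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

omega, t = 1.0, 0.7
C = np.array([[0.0, omega], [-omega, 0.0]])
I2, Z2 = np.eye(2), np.zeros((2, 2))

# A from (2) with m = 2 Jordan blocks on the imaginary axis
A = np.block([[C, I2],
              [Z2, C]])

# e^{Ct} from (4): a rotation matrix
eCt = np.array([[ np.cos(omega*t), np.sin(omega*t)],
                [-np.sin(omega*t), np.cos(omega*t)]])

# Closed form (4) for m = 2: block upper triangular with t e^{Ct} off-diagonal
eAt_closed = np.block([[eCt, t*eCt],
                       [Z2,  eCt]])

print(np.allclose(expm(A*t), eAt_closed))  # True
```

The closed form follows because $A$ splits into a block-diagonal part and a nilpotent part that commute, so $e^{A\,t}$ factors into $e^{C\,t}$ blocks times the polynomial terms.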
Even though $e^{C\,t}$ remains bounded, for $m>1$ some entries of $e^{A\,t}$ become unbounded due to the polynomial terms in $t$ (up to order $t^{m-1}$). To see that an unbounded $e^{A\,t}$, and thus an unbounded unforced response $e^{A\,t}x(0)$, is not an obstacle to stabilizing $x(t)$ under the input constraint $u(t) \in \cal{U}$, one can look at the second (convolution) term of $(3)$. Namely, in the case of $(2)$, if one chooses $u(t) = M\,\sin(\omega\,t+\phi)$, with $M$ small enough that $u(t) \in \cal{U}$, then the convolution term can also be obtained from an augmented autonomous state-space model. This augmented model again uses the structure of $A$ from $(2)$, but with $m+1$ copies of $C$ along its diagonal and all initial conditions equal to zero except the bottom two entries, which represent $u(t)$. The resulting response also follows $(4)$, but since the model is larger it contains polynomials of one order higher. So, no matter how small $M$ is, at a large enough $t$ the response of the augmented model can become larger in magnitude than the unforced response. Hence the reachable set produced by the convolution integral eventually grows faster in $t$ than the unforced response.
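This mechanism can be illustrated by simulation. The sketch below (with illustrative values: $m=2$, amplitude $M=0.01$, horizon $t=2000$, zero-order-hold discretization) shows that the response to an arbitrarily small resonant input, which grows like $t^m$, eventually exceeds the unforced response, which grows like $t^{m-1}$:

```python
import numpy as np
from scipy.linalg import expm

omega, dt, steps = 1.0, 0.05, 40000          # horizon t = steps*dt = 2000
C = np.array([[0.0, omega], [-omega, 0.0]])
I2, Z2 = np.eye(2), np.zeros((2, 2))
A = np.block([[C, I2], [Z2, C]])             # (2) with m = 2
B = np.array([[0.0], [0.0], [0.0], [1.0]])

# Zero-order-hold discretization via the augmented matrix exponential
M_aug = np.zeros((5, 5))
M_aug[:4, :4], M_aug[:4, 4:] = A, B
Phi = expm(M_aug * dt)
Ad, Bd = Phi[:4, :4], Phi[:4, 4:].ravel()

Mamp = 0.01                                  # small resonant input amplitude
x_free = np.ones(4)                          # unforced response, x(0) = 1
x_forced = np.zeros(4)                       # forced response, x(0) = 0
for k in range(steps):
    u = Mamp * np.sin(omega * k * dt)        # |u| <= Mamp at all times
    x_free = Ad @ x_free
    x_forced = Ad @ x_forced + Bd * u

# Forced response grows ~ M t^2 / 4, unforced only ~ t:
print(np.linalg.norm(x_forced) > np.linalg.norm(x_free))  # True
```

The same comparison holds for any $m$: the forced response gains one extra polynomial order over the unforced one, which is why an arbitrarily tight input bound still suffices.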
For example, consider numerically solving the following optimisation problem, obtained by discretizing $(1)$ using $(2)$ with $\omega=1$, $m=3$, zero-order-hold discretization and time step $\Delta t=0.1$, such that
$$ C = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad A = \begin{bmatrix} C & I & 0 \\ 0 & C & I \\ 0 & 0 & C \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} A_d & B_d \\ 0 & I \end{bmatrix} = e^{\begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix}\Delta t}. $$
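The zero-order-hold pair $(A_d, B_d)$ can be computed exactly with a single matrix exponential, as the last identity above indicates. A small sketch in Python:

```python
import numpy as np
from scipy.linalg import expm

omega, dt = 1.0, 0.1
C = np.array([[0.0, omega], [-omega, 0.0]])
I2, Z2 = np.eye(2), np.zeros((2, 2))
A = np.block([[C, I2, Z2],
              [Z2, C, I2],
              [Z2, Z2, C]])                     # m = 3 Jordan blocks
B = np.vstack([np.zeros((5, 1)), [[1.0]]])

# [Ad Bd; 0 I] = expm([A B; 0 0] * dt)  (zero-order hold)
n, p = A.shape[0], B.shape[1]
M = np.zeros((n + p, n + p))
M[:n, :n], M[:n, n:] = A, B
Phi = expm(M * dt)
Ad, Bd = Phi[:n, :n], Phi[:n, n:]

# Sanity check: the (1,1) block of the augmented exponential is e^{A dt}
print(np.allclose(Ad, expm(A * dt)))  # True
```

This avoids integrating $\int_0^{\Delta t} e^{A\tau} B\,d\tau$ numerically, which would otherwise be needed for $B_d$.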
The optimisation problem is then defined as minimizing the following cost function
\begin{align} J =& \sum_{k=1}^{N} x_k^\top\,x_k, \\ \text{s.t.}\ & x_{k+1} = A_d\,x_k + B_d\,u_k, \\ & -0.3 \leq u_k \leq 0.3, \\ & x_0 = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}^\top. \end{align}
For $N=500$ numerical results can be obtained; the plot of the resulting optimal input and state trajectories is omitted here.
It can be noted that, due to the oscillatory nature of this system, there is no upper bound on the number of switches between the lower and upper bounds of the control input when there is no bound on $\|x_0\|$.
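Since the cost is quadratic in the stacked inputs and the only constraints are bounds on $u_k$, the problem above is a bounded-variable least-squares problem, which can be sketched with `scipy.optimize.lsq_linear` (shown with a shorter horizon $N=100$ than the $N=500$ used above, to keep it light):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import lsq_linear

omega, dt, N, n = 1.0, 0.1, 100, 6
C = np.array([[0.0, omega], [-omega, 0.0]])
I2, Z2 = np.eye(2), np.zeros((2, 2))
A = np.block([[C, I2, Z2], [Z2, C, I2], [Z2, Z2, C]])
B = np.vstack([np.zeros((5, 1)), [[1.0]]])

# Zero-order-hold discretization via the augmented matrix exponential
M = np.zeros((7, 7))
M[:6, :6], M[:6, 6:] = A, B
Phi = expm(M * dt)
Ad, Bd = Phi[:6, :6], Phi[:6, 6:]

x0 = np.ones(6)

# Stack the dynamics: x_k = Ad^k x0 + sum_{j<k} Ad^(k-1-j) Bd u_j
F = np.zeros((N * n, n))        # maps x0 to (x_1, ..., x_N)
G = np.zeros((N * n, N))        # maps (u_0, ..., u_{N-1}) to (x_1, ..., x_N)
pows = [Bd.ravel()]             # pows[i] = Ad^i Bd
for i in range(1, N):
    pows.append(Ad @ pows[-1])
P = np.eye(n)
for k in range(1, N + 1):
    P = Ad @ P                  # P = Ad^k
    F[(k - 1) * n:k * n, :] = P
    for j in range(k):
        G[(k - 1) * n:k * n, j] = pows[k - 1 - j]

# min_u ||G u + F x0||^2  s.t.  -0.3 <= u_k <= 0.3
res = lsq_linear(G, -F @ x0, bounds=(-0.3, 0.3))
u = res.x

# The bounded-input optimum does at least as well as u = 0
print(np.linalg.norm(F @ x0 + G @ u) <= np.linalg.norm(F @ x0))  # True
```

Inspecting `u` shows that large stretches of the optimal input sit on the bounds $\pm 0.3$, consistent with the bang-bang character discussed above.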
[1]: Hu, T., & Lin, Z. (2001). Null Controllability—Continuous-Time Systems. In *Control Systems with Actuator Saturation: Analysis and Design*. Birkhäuser.