Control theory: what is an unstable pole


Can someone provide a definition of an unstable pole as it appears in control theory?

I wish to know whether the definition of an unstable pole includes the case where the pole is at the origin, i.e. $s = 0$.

Multiple references use the term "unstable pole", which is thrown around commonly in control:

  1. https://en.wikipedia.org/wiki/Nyquist_stability_criterion

  2. https://arxiv.org/pdf/1207.6962v1.pdf

  3. http://wolfweb.unr.edu/~fadali/ee472/FiniteSettlingDesign.pdf

  4. http://www.mathworks.com/help/mpc/ug/explicit-mpc-control-of-an-aircraft-with-unstable-poles.html

  5. http://www.phoneoximeter.org/uploads/media/EECE460_Fundamental_Limitations_01.pdf

But there is not a single definition for this term.

Sometimes an author uses "right half plane pole" in place of unstable pole. But what is the right half plane? Does it include the imaginary axis or not?


BEST ANSWER

First, consider the following first-order transfer function:

$$ \frac{X(s)}{U(s)} = \frac{-a}{s - a} $$

where $a \in \mathbb{C}$ is the system pole (the sign is chosen so the DC gain is $1$). In the time domain this corresponds to

$$ \dot{x}(t) = a \, (x(t) - u(t)), $$

and for a constant input $u$ the solution is $ x(t) = u + (x(0) - u)\, e^{a t} $.

Since $ a $ is complex we can write it as $ a = b + j c $, where $ b $ is the real part of $ a $ and $ c $ the imaginary part. Then:

$$ x(t) - u = (x(0) - u)\, e^{b t} e^{j c t} $$

Note that $ e^{j c t} $ will cause the response to oscillate, while $ e^{b t} $ will determine how (and if) $x$ converges to $u$.

If $ b < 0 $, the error goes to zero since $ e^{b t} \rightarrow 0 $ as $ t \rightarrow \infty $, so $x$ converges to $u$. Meanwhile, if $ b > 0 $, the system diverges since $ e^{b t} \rightarrow \infty $ as $ t \rightarrow \infty $.

Note that $ c $ plays no role here. So, independently of the imaginary part, the real part of the pole must be negative for stability.

For the case $ b = 0 $, the system neither converges nor diverges. However, stability is defined by strict convergence, so $ b = 0 $ is not stable. Pay attention to the choice of words: it may or may not be called unstable; I recommend reading more on marginal stability for this case.
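The three cases can be checked numerically from the closed-form solution $x(t) = u + (x(0) - u)e^{at}$ of $\dot{x} = a(x - u)$; a minimal sketch (the pole values $-0.5 + 2j$, $2j$, and $0.5 + 2j$ are made-up examples):

```python
import cmath

def response(a, x0, u, t):
    """Closed-form solution x(t) = u + (x0 - u) * exp(a t)
    of x' = a (x - u) with constant input u."""
    return u + (x0 - u) * cmath.exp(a * t)

u, x0 = 1.0, 0.0
for b, label in [(-0.5, "b<0 stable"), (0.0, "b=0 marginal"), (0.5, "b>0 unstable")]:
    a = complex(b, 2.0)              # pole a = b + jc; c = 2 only adds oscillation
    err = abs(response(a, x0, u, 20.0) - u)
    print(label, err)                # error decays, stays at 1, or blows up
```

The imaginary part $c$ changes the phase but not the magnitude of the error, matching the argument above.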

ANOTHER ANSWER

The simplest and most effective concept of stability is bounded-input, bounded-output (BIBO) stability: a system is BIBO stable if its output is bounded for every bounded input. With this definition, a transfer function with a pole on the imaginary axis, including the origin, is unstable.
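That an imaginary-axis pole violates BIBO stability can be seen by driving $1/(s^2 + 1)$, which has poles at $s = \pm j$, with the bounded input $\sin(t)$ (resonance). A rough simulation sketch (the system and input are illustrative choices, not from the answer):

```python
import math

def max_output(t_end, dt=1e-3):
    """Simulate x'' + x = sin(t) (poles at +/- j) by semi-implicit Euler
    and return the peak |x| seen up to t_end. Input is bounded by 1."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        v += dt * (math.sin(t) - x)   # v' = sin(t) - x
        x += dt * v                   # x' = v
        t += dt
        peak = max(peak, abs(x))
    return peak

print(max_output(20.0), max_output(80.0))   # peak keeps growing with time
```

The exact response is $(\sin t - t\cos t)/2$, so the envelope grows like $t/2$: bounded input, unbounded output.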

Other definitions exist. They make things more complicated, without clarifying much.

ANOTHER ANSWER

I learned my classical control theory (control theory using frequency-domain techniques) from "Control Systems Engineering" by Norman S. Nise. Below is a summary of my personal notes, tailored to your question.

Firstly, $s$ is a general complex number. It is used in the Laplace transform, which applies to linear time-invariant (LTI) system analysis. Taking the Laplace transform of the system's impulse response gives the system's transfer function. The transfer function completely characterizes the system, and from it we can determine whether the system is stable or unstable.

A system's response to an input can be considered to consist of two components: the forced response (analogous to the particular solution of a differential equation, resulting from the system's input) and the natural response (analogous to the homogeneous solution).

A system may be considered stable if its natural response approaches zero as time approaches infinity. The natural response decays to zero when the system's poles lie in the left half of the complex plane (i.e. have negative real parts), which makes the time-domain response consist of decaying exponentials.

A system may be considered unstable if its natural response approaches infinity as time approaches infinity. This is the result of the system having poles in the right half of the complex plane (with positive real parts), which makes the time-domain response contain growing exponentials.

A system may be considered marginally stable if its natural response neither grows nor decays as time approaches infinity; that is, the response remains constant or oscillates. This covers the case where the poles are on the imaginary axis. If the poles are a conjugate pair (of multiplicity 1), the response is purely sinusoidal (as a result of Euler's formula). If the pole is at $s = 0$ (at the origin), the natural response is a constant step (this is characteristic of integrators).
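Each simple pole $p$ contributes a mode $e^{pt}$ to the natural response, so the two marginal cases can be checked directly; a small sketch (the pole values $\pm 2j$ and $0$ are illustrative):

```python
import cmath, math

def mode(p, t):
    """Natural-response mode e^{p t} contributed by a simple pole p."""
    return cmath.exp(p * t)

# A conjugate pair +/- 2j combines, via Euler's formula, into a real
# sinusoid that neither grows nor decays:
t = 3.7
pair = (mode(2j, t) + mode(-2j, t)) / 2
print(pair.real, math.cos(2 * t))   # the two values agree: cos(2t)

# A single pole at s = 0 contributes e^{0*t} = 1: a constant mode,
# which is why an integrator's natural response is a step.
print(mode(0, 1e6))
```

In both cases $|e^{pt}|$ stays bounded at $1$ for all $t$, which is exactly the "neither grows nor decays" behavior.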

Therefore, if the transfer function has a pole at $s = 0$, the system is marginally stable.

Turning to BIBO stability: if the transfer function of the system were $1/s$, with its pole at $s = 0$, then it is easily shown that the integrator is BIBO unstable. For example, if you put in a constant input (let's say $u(t) = 1$ for all time), the output is a ramp, which approaches infinity with time.
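The integrator example above can be sketched numerically: accumulate the constant input $u = 1$ (bounded) and watch the output track the unbounded ramp $y(t) = t$ (a Riemann-sum approximation, with illustrative end times):

```python
def integrate_const(u, t_end, dt=1e-3):
    """Output of the integrator 1/s for constant input u:
    y(t) = integral of u over [0, t], approximated as a Riemann sum."""
    y, t = 0.0, 0.0
    while t < t_end:
        y += u * dt
        t += dt
    return y

print(integrate_const(1.0, 10.0))    # ~10: output tracks t
print(integrate_const(1.0, 100.0))   # ~100: grows without bound
```

The input never exceeds 1, yet the output grows linearly forever, so the pole at $s = 0$ fails the BIBO test even though the system is only "marginally" stable in the natural-response sense.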