Can anyone knowledgeable elaborate on what exactly is a $H^\infty$ norm and why it is used in control theory instead of some other norms?
Thanks!
The $H_\infty$ norm of a system $G(s)$ is defined as
$$\lVert G(s) \rVert_\infty = \sup_\omega \lVert G(j \omega) \rVert_2 = \sup_\omega \bar{\sigma} (G(j \omega))$$
where $\lVert \cdot \rVert_2$ is the induced 2-norm, or equivalently the maximum singular value (denoted by $\bar{\sigma}$). Intuitively, this norm gives the worst-case amplification the system can apply to an input signal, taken over all frequencies and all input directions.
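As a rough numerical sketch of this definition, the supremum can be approximated by gridding the frequency axis and taking the largest maximum singular value of the frequency response. The grid bounds and the example system $G(s) = 1/(s+1)$ below are illustrative choices, not part of the answer above:

```python
import numpy as np

def hinf_norm(G, omegas):
    """Approximate the H-infinity norm of a transfer-function matrix G
    by gridding the frequency axis and taking the largest maximum
    singular value of G(j*omega) over the grid."""
    sigma_max = 0.0
    for w in omegas:
        # Largest singular value of the frequency response at this omega
        s = np.linalg.svd(G(1j * w), compute_uv=False)[0]
        sigma_max = max(sigma_max, s)
    return sigma_max

# Example: G(s) = 1/(s + 1), a SISO low-pass filter whose
# H-infinity norm equals its DC gain, 1.
G = lambda s: np.array([[1.0 / (s + 1.0)]])
omegas = np.logspace(-3, 3, 2000)
print(hinf_norm(G, omegas))  # close to 1.0
```

A frequency grid only lower-bounds the true supremum; dedicated algorithms (e.g. bisection on the Hamiltonian matrix) compute it reliably, but gridding conveys the idea.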
Generally, the system $G(s)$ is chosen to be the transfer function from the exogenous inputs, i.e. the inputs we have no control over, such as references, disturbances, noise, and parameter uncertainty, to the error signals we want to keep small, such as the tracking error. In this case, the $H_\infty$ norm of the system measures the worst-case effect of the unwanted signals on the error signals.
$H_\infty$ optimal control theory seeks a dynamic feedback controller that generates the control signal from the measured outputs, stabilizes the closed-loop system, and minimizes the $H_\infty$ norm of the system from the unwanted signals to the error outputs. This amounts to minimizing the worst-case effect of the unwanted signals, i.e. over all possible inputs, while guaranteeing stability, which is why it is considered a robust control technique.
Edit: There are other norms used in control theory as well. A common one is the $H_2$ norm, defined as
$$ \lVert G(s) \rVert_2 = \left( \frac{1}{2 \pi} \int_{-\infty}^\infty \operatorname{tr} \left( G^T(-j \omega) G(j \omega) \right) d\omega \right) ^ {1/2}$$
Intuitively, this gives the average amplification of a signal caused by the system over all possible signals. This norm is particularly useful when the disturbances are modeled as white noise, i.e. zero-mean stochastic processes with a flat power spectrum (taken to be Gaussian in the LQG setting). The Linear Quadratic Gaussian (LQG) optimal control problem consists of finding a feedback controller that stabilizes the system while minimizing the $H_2$ norm of the system from noise to error, under the assumption of white-noise inputs.
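The $H_2$ norm can likewise be approximated numerically by integrating $\operatorname{tr}\left(G^T(-j\omega)G(j\omega)\right)$ over a frequency grid. The sketch below uses the symmetry of the integrand for real-coefficient systems to integrate over positive frequencies only; the grid and the example system $G(s) = 1/(s+1)$ (whose $H_2$ norm is $1/\sqrt{2}$) are illustrative assumptions:

```python
import numpy as np

def h2_norm(G, omegas):
    """Approximate the H2 norm of G by trapezoidal integration of
    tr(G(jw)^H G(jw)) over a positive-frequency grid. For a system
    with real coefficients the integrand is even in omega, so
    (1/2pi) * integral over (-inf, inf) = (1/pi) * integral over (0, inf)."""
    integrand = np.array(
        [np.trace(G(1j * w).conj().T @ G(1j * w)).real for w in omegas]
    )
    # Trapezoidal rule over the grid
    integral = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(omegas))
    return np.sqrt(integral / np.pi)

# Example: G(s) = 1/(s + 1); the integrand is 1/(1 + omega^2),
# so the H2 norm is 1/sqrt(2) ~ 0.707.
G = lambda s: np.array([[1.0 / (s + 1.0)]])
omegas = np.linspace(0.0, 1000.0, 200000)
print(h2_norm(G, omegas))
```

In practice the $H_2$ norm of a state-space model is computed exactly from the controllability or observability Gramian (a Lyapunov equation) rather than by quadrature, but the frequency-domain sum makes the "average amplification" interpretation concrete.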