Solution set of all LQR problems smaller than all stabilizing controllers?


As far as I know, the LQR problem is solvable for a linear state-space system (A, B, C, D) if the pair (A, B) is stabilizable, together with a few more technical conditions ensuring that all unstable and marginally stable modes are reflected in the cost functional.

Furthermore, there are guaranteed gain and phase margins for the solution (see, for example, Section 19.7 of https://ocw.mit.edu/courses/mechanical-engineering/2-154-maneuvering-and-control-of-surface-and-underwater-vehicles-13-49-fall-2004/lecture-notes/lec19.pdf).

This leads me to my questions:

  1. In my understanding, this implies that the set of all possible LQR controllers (over all valid quadratic cost functionals) is strictly smaller than the set of all stabilizing controllers. Can this be proven?

  2. Does this imply that there exists a set of suboptimal controllers which suffer from excessive state error relative to their control effort, or the other way round? And are there situations where one would want such a controller? (I am aware that optimality always refers to a cost functional. But I question whether it might be interesting to design a controller which is not optimal with respect to ANY quadratic cost functional.)
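The solvability conditions mentioned above, stabilizability of (A, B) and the dual detectability condition on the cost, can be checked numerically. A minimal sketch using the PBH (Hautus) rank test, with a made-up example pair:

```python
import numpy as np

def is_stabilizable(A, B, tol=1e-9):
    """PBH test: (A, B) is stabilizable iff rank [sI - A, B] = n
    for every eigenvalue s of A with Re(s) >= 0."""
    n = A.shape[0]
    for s in np.linalg.eigvals(A):
        if s.real >= -tol:
            M = np.hstack([s * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M, tol=tol) < n:
                return False
    return True

# Hypothetical example: one unstable mode (s = 1) that the input reaches.
A = np.array([[1.0, 0.0], [0.0, -1.0]])
B = np.array([[1.0], [0.0]])
print(is_stabilizable(A, B))  # True

# Detectability of (A, C) is the dual test: is_stabilizable(A.T, C.T).
```

With B = [[0], [1]] instead, the unstable mode is unreachable and the test returns False.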

There are 3 answers below.

Answer 1:

This is an incomplete answer, but it would have been too long for a comment.

It can be shown that the Nyquist plot of an LQR loop transfer function always stays outside the unit circle centered at the critical point −1. In other words, it has at least 60° of phase margin, an infinite upper gain margin, and a lower gain margin of one half.
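This can be checked numerically via Kalman's return-difference inequality: for a SISO LQR loop $L(s) = K(sI - A)^{-1}B$ with scalar $R$, we have $|1 + L(j\omega)| \ge 1$ for all $\omega$, from which the stated margins follow. A sketch with an arbitrary example system, assuming scipy is available:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Arbitrary stable, controllable example plant.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the Riccati equation and form the LQR gain K = R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Sample |1 + K(jwI - A)^{-1}B| over a frequency grid; the minimum
# should never drop below 1 (up to numerical tolerance).
omegas = np.logspace(-2, 3, 2000)
min_return_diff = min(
    abs(1.0 + (K @ np.linalg.solve(1j * w * np.eye(2) - A, B))[0, 0])
    for w in omegas
)
print(min_return_diff)
```

Keeping the Nyquist curve outside the unit disk around −1 is exactly what yields the 60° phase margin and the [1/2, ∞) gain margin interval.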

I do not know whether there exist systems whose margins satisfy these bounds for any stabilizing state feedback. Unless all states are uncontrollable, I would think that there is always a state feedback with smaller margins. If this is true, then indeed the set of all possible LQR controllers should be smaller than the set of all stabilizing controllers.

I am not sure about your second question.

Answer 2:

Actually, I figured out part of the answer myself during a rather long research session yesterday evening. It wasn't obvious to me before; please add more detail to these thoughts if you can. I regard this question as quite fundamental for understanding optimal control from a control-theory point of view, rather than focusing on optimization and numerics.

Regarding the second question:

Take the following system: $$ \dot{x} = -2x + u $$ There is a set of stabilizing controllers parametrized by $$ u = ax $$ with $ a \in (0, 2) $; the closed loop $\dot{x} = (a-2)x$ is stable for every such $a$.

These controllers are stabilizing, but they are never optimal in the sense of a quadratic cost functional: the positive feedback both slows the decay and spends control effort doing so. This is a more precise statement of what I meant by 'heavy state error compared to its control effort'.
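For this scalar plant the claim can be verified in closed form: with cost $\int (q x^2 + r u^2)\, dt$, the Riccati equation $-4p - p^2/r + q = 0$ has the positive solution $p = r(\sqrt{4 + q/r} - 2)$, so the optimal law $u = -(p/r)x$ has coefficient $a = -(\sqrt{4 + q/r} - 2) \le 0$ and never lands in $(0, 2)$. A quick numerical sketch (the weight values below are arbitrary):

```python
import numpy as np

# Scalar plant xdot = -2 x + u with cost integral of q x^2 + r u^2.
# The scalar Riccati equation -4 p - p^2 / r + q = 0 has the positive
# solution p = r (sqrt(4 + q/r) - 2); the optimal law is u = -(p/r) x.
def optimal_feedback_coefficient(q, r):
    k = np.sqrt(4.0 + q / r) - 2.0  # optimal gain, u = -k x
    return -k                        # rewritten in the form u = a x

# Sweep arbitrary weight choices: the resulting a is always <= 0,
# so no quadratic cost reproduces a feedback with a in (0, 2).
coeffs = [optimal_feedback_coefficient(q, r)
          for q, r in [(0.01, 1.0), (1.0, 1.0), (100.0, 1.0), (1.0, 0.01)]]
print(coeffs)
```

The optimal closed loop $\dot{x} = (-2 - k)x$ is always at least as fast as the open loop, which is exactly why positive-feedback gains can never be LQ-optimal here.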

In my opinion, this is a clear argument for sticking to LQR controller design in practical systems whenever stability matters more than shaping the dynamics.

My thoughts were inspired by studying 'Inverse Optimality in Robust Stabilization' by Freeman and Kokotovic. They actually have a quite nice example where this is generalized to a nonlinear system.

Answer 3:

LQR is optimal over the infinite horizon(!) and only for the weights you have chosen in the cost function. That is a big difference, and LQR controllers in practice are hardly ever optimal, since not all states are available for measurement; once you add observers, you are back to square one. There can always be a better nonquadratic cost function that defines a better controller than the LQR, or a controller with infinite cost that is still stabilizing. Hence the answer to the first question is always yes. But the optimization problem is much easier to solve with quadratic cost functions.
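The observer point can be illustrated numerically (with my own example system, not one from the answer): full-state LQR keeps the return difference $|1 + L(j\omega)| \ge 1$, but once the same gains are wrapped around an observer, that guarantee disappears; Doyle's classic LQG counterexample makes this dramatic. A sketch, assuming scipy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical open-loop unstable plant with a single measured output.
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
I2 = np.eye(2)

K = B.T @ solve_continuous_are(A, B, I2, np.eye(1))        # LQR gain (R = I)
Lo = solve_continuous_are(A.T, C.T, I2, np.eye(1)) @ C.T   # observer gain

def min_return_diff(loop):
    ws = np.logspace(-2, 3, 4000)
    return min(abs(1.0 + loop(1j * w)) for w in ws)

def lqr_loop(s):  # state-feedback loop broken at the plant input
    return (K @ np.linalg.solve(s * I2 - A, B))[0, 0]

def lqg_loop(s):  # observer-based controller in series with the plant
    ctrl = K @ np.linalg.solve(s * I2 - (A - B @ K - Lo @ C), Lo)
    plant = C @ np.linalg.solve(s * I2 - A, B)
    return (ctrl @ plant)[0, 0]

print(min_return_diff(lqr_loop))   # >= 1 by the Kalman inequality
print(min_return_diff(lqg_loop))   # no lower bound holds in general
```

Only the state-feedback loop carries the guaranteed bound; the observer-based value must be checked case by case.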

The second question is not well formulated. There is no relation between steady-state error behavior and optimality. Think of a PD controller with a fixed steady-state error, which can be much better than your LQR in terms of performance. It can also be the result of an LQ problem, but you need to find the cost function that produces it.