As far as I know, the LQR problem is solvable for a linear state-space system (A, B, C, D) if (A, B) is stabilizable, together with a few more technical conditions ensuring that all unstable and marginally stable modes are reflected in the cost functional (typically, detectability of (Q^{1/2}, A)).
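For concreteness, the two conditions above can be checked numerically with the Hautus (PBH) test. The following is a small sketch with an assumed example system (the matrices and helper names are purely illustrative):

```python
import numpy as np

# Hypothetical example: check stabilizability of (A, B) and detectability of
# (Q^{1/2}, A) via the Hautus/PBH test before attempting an LQR design.
A = np.array([[1.0, 0.0], [0.0, -2.0]])   # one unstable mode at s = 1
B = np.array([[1.0], [0.0]])              # the unstable mode is controllable
Q = np.eye(2)                             # state weight (assumed)

def hautus_stabilizable(A, B):
    # (A, B) is stabilizable iff rank [sI - A, B] = n for every eigenvalue
    # s of A with Re(s) >= 0 (only the "bad" modes must be controllable).
    n = A.shape[0]
    for s in np.linalg.eigvals(A):
        if s.real >= 0:
            M = np.hstack([s * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M) < n:
                return False
    return True

def hautus_detectable(C, A):
    # (C, A) is detectable iff (A^T, C^T) is stabilizable, by duality.
    return hautus_stabilizable(A.T, C.T)

Qh = np.linalg.cholesky(Q)  # a square root of Q (here simply the identity)
print(hautus_stabilizable(A, B), hautus_detectable(Qh.T, A))  # True True
```

With both tests passing, the algebraic Riccati equation has a stabilizing solution for this example.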
Furthermore, the solution comes with guaranteed gain and phase margins (see, for example, Section 19.7 of https://ocw.mit.edu/courses/mechanical-engineering/2-154-maneuvering-and-control-of-surface-and-underwater-vehicles-13-49-fall-2004/lecture-notes/lec19.pdf).
This leads me to my questions:
In my understanding, this implies that the set of all possible LQR controllers (over all valid quadratic cost functionals) is strictly smaller than the set of stabilizing controllers. Can this be proven?
Does this imply that there exists a set of suboptimal controllers which suffer from a large state error relative to their control effort, or the other way around? And are there situations where one would like to have such a controller? (I am aware that optimality is always defined with respect to a cost functional. But I question whether it might be interesting to design a controller which is not optimal with respect to ANY quadratic cost functional.)
This is an incomplete answer, but it would have been too long for a comment.
It can be shown that the Nyquist plot of an LQR loop gain always stays outside the unit disc centered at −1. In other words, the loop has at least 60° of phase margin and a gain margin interval of [1/2, ∞): the gain may be increased without bound or halved without losing stability.
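This property (the Kalman return-difference inequality, |1 + K(jωI − A)^{-1}B| ≥ 1 for single-input LQR) can be verified numerically. A minimal sketch, assuming a double-integrator plant with identity weights:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator example (matrices and weights are assumptions).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # control weight

# Solve the continuous-time algebraic Riccati equation for the LQR gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # K = R^{-1} B^T P

# Sample the loop gain L(jw) = K (jwI - A)^{-1} B and check the
# return-difference inequality |1 + L(jw)| >= 1 on a frequency grid.
ws = np.logspace(-2, 2, 400)
dist = []
for w in ws:
    L = (K @ np.linalg.solve(1j * w * np.eye(2) - A, B)).item()
    dist.append(abs(1 + L))
print(min(dist) >= 1.0 - 1e-9)  # True: the Nyquist curve avoids the unit disc at -1
```

For this plant the gain works out to K = [1, √3], and |1 + L(jω)|² = 1 + 1/ω² + 1/ω⁴, which is indeed never below 1.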
I do not know whether there exist systems whose margins satisfy these bounds for every stabilizing state feedback. Unless all states are uncontrollable, I would expect that there is always a stabilizing state feedback with smaller margins. If this is true, then indeed the set of all possible LQR controllers should be smaller than the set of all stabilizing controllers.
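To support this intuition, here is a hand-picked (non-LQR) stabilizing state feedback for the same double integrator whose phase margin falls far below the LQR's 60° bound. The gain values are chosen purely for illustration:

```python
import numpy as np

# Double integrator with a lightly damped, hand-picked stabilizing feedback.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 0.1]])   # closed-loop polynomial: s^2 + 0.1 s + 1 (stable)

assert np.all(np.linalg.eigvals(A - B @ K).real < 0)  # feedback is stabilizing

# Sample the loop gain L(jw) = K (jwI - A)^{-1} B, locate the gain
# crossover |L| = 1, and read off the phase margin there.
ws = np.logspace(-2, 2, 20000)
L = np.array([(K @ np.linalg.solve(1j * w * np.eye(2) - A, B)).item()
              for w in ws])
i = np.argmin(np.abs(np.abs(L) - 1.0))        # gain-crossover index
pm_deg = 180.0 + np.degrees(np.angle(L[i]))   # phase margin in degrees
print(pm_deg < 60.0)  # True: well below the guaranteed LQR phase margin
```

Here the phase margin is roughly arctan(0.1·ω_c) ≈ 6°, so this perfectly valid stabilizing gain cannot be the LQR solution for any of the usual quadratic weights on this plant.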
I am not sure about your second question.