What are some good books or online resources for learning about optimization and probability focused control theory?


A mathematician I was speaking to recently mentioned that a lot of the newest control theory relies mostly on optimization and probability theory (particularly stochastic processes), rather than the complex analysis upon which it used to be focused.

Could someone recommend books/resources for this more modern control theory? Also, is knowing about the old type necessary to learn about more modern control theory?

I know there are some other questions here requesting books for control theory, but I'm not sure which ones are about old control theory, and which are about the new type (so information on what terms to search for would also be helpful).


There are 2 answers below.


A small contribution to the answer: reinforcement learning is very popular these days. It is studied by many different disciplines because it is a very general concept. Using learning techniques, one can teach a control system to steer itself. For instance, see this robot, which taught itself to walk: https://www.youtube.com/watch?v=SBf5-eF-EIw.

These types of techniques rely heavily on stochastic processes and statistical theory. Some Wikipedia pages: https://en.wikipedia.org/wiki/Markov_decision_process, https://en.wikipedia.org/wiki/Dynamic_programming (heavily used in reinforcement learning and originally invented by the control theorist Bellman), and https://en.wikipedia.org/wiki/Reinforcement_learning.
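The Markov decision process / dynamic programming connection can be made concrete with a few lines of code. Below is a minimal value-iteration sketch on a made-up two-state, two-action MDP (the transition matrices and rewards are arbitrary numbers chosen for illustration, not taken from any of the linked pages); it repeatedly applies Bellman's optimality backup until the values converge:

```python
import numpy as np

# P[a][s, s'] = transition probability under action a; R[a][s] = expected reward.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.1, 0.9]])]   # action 1
R = [np.array([1.0, 0.0]),                 # reward for action 0
     np.array([0.0, 2.0])]                 # reward for action 1
gamma = 0.9                                # discount factor

V = np.zeros(2)
for _ in range(500):
    # Bellman backup: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = np.array([R[a] + gamma * P[a] @ V for a in range(2)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)   # greedy policy w.r.t. the converged values
print(V, policy)
```

Reinforcement learning can be viewed as doing this same backup when P and R are unknown and must be estimated from interaction with the system.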


Stephen Boyd, Craig Barratt, Linear Controller Design: Limits of Performance, Prentice-Hall, 1991.

The main topic of the book is closed-loop design and the computation of performance limits using convexity. The book introduces a standard framework for the control design problem and describes many practical design specifications in this framework. It is shown that many of these specifications are closed-loop convex; the corresponding control design problems can therefore be cast as infinite-dimensional nondifferentiable convex optimization problems. The book shows how these problems can be solved via the ellipsoid and cutting-plane algorithms, together with a Ritz approximation.

The book describes how the achievable performance can be computed numerically for any family of closed-loop convex specifications. Closed-loop convex specifications include, for example, H-two, H-infinity, or l-one norm bounds, entropy bounds, step response envelopes and asymptotic command decoupling.
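To give a flavor of the nondifferentiable convex optimization involved, here is a toy sketch of my own (not an example from the book): minimizing a piecewise-linear convex function f(x) = max_i (a_i x + b_i) with a one-dimensional cutting-plane scheme, where each subgradient cuts away half of the remaining interval. Boyd and Barratt apply far more general versions of this idea to closed-loop convex design specifications:

```python
import numpy as np

# Toy nondifferentiable convex objective: a pointwise max of affine functions.
a = np.array([-2.0, 0.5, 3.0])
b = np.array([1.0, 0.0, -1.0])

def f(x):
    return np.max(a * x + b)

def subgrad(x):
    # The slope of any active (maximizing) piece is a valid subgradient.
    return a[np.argmax(a * x + b)]

lo, hi = -10.0, 10.0            # initial interval known to contain a minimizer
for _ in range(100):
    x = 0.5 * (lo + hi)
    g = subgrad(x)
    if g > 0:                   # function increasing here: minimizer lies left
        hi = x
    else:                       # decreasing (or flat): minimizer lies right
        lo = x

x_star = 0.5 * (lo + hi)
print(x_star, f(x_star))
```

For this data the three pieces all meet at x = 0.4, so the scheme converges there even though f is not differentiable at the minimizer.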


Richard M. Murray, Optimization-Based Control, February 2010.

These notes serve as a supplement to Feedback Systems by Åström and Murray and expand on some of the topics introduced there. Our focus is on the use of optimization-based methods for control, including optimal control theory, receding horizon control and Kalman filtering. Each chapter is intended to be a standalone reference for advanced topics that are introduced in Feedback Systems.
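One of the topics the notes cover, Kalman filtering, is easy to sketch. Below is a minimal example of my own (the model and noise levels are invented for illustration): a filter tracking a constant-velocity target from noisy position measurements, alternating the standard predict and update steps:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
C = np.array([[1.0, 0.0]])              # we measure position only
Q = 1e-4 * np.eye(2)                    # process-noise covariance
R = np.array([[0.05]])                  # measurement-noise covariance

x_true = np.array([0.0, 1.0])           # true initial state
x_hat = np.zeros(2)                     # filter's initial estimate
P = np.eye(2)                           # initial estimate covariance

for _ in range(200):
    x_true = A @ x_true
    y = C @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)

    # Predict step: propagate estimate and covariance through the dynamics.
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # Update step: correct with the new measurement via the Kalman gain.
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (y - C @ x_hat)).ravel()
    P = (np.eye(2) - K @ C) @ P

print(x_true, x_hat)
```

Although the velocity is never measured directly, the filter recovers it because the model links velocity to the observed position sequence.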