This advanced course focuses on design methods for optimal and robust control. Major emphasis is placed on practical computational skills and realistically complex problem assignments.
The unifying concept is the minimization of some optimization criterion. The properties of the resulting controller depend on which criterion is minimized. Minimizing the popular integral-of-squares criterion seeks a trade-off between regulation error and control effort. The modern theory introduces the concept of a system norm. Minimizing the H2 norm generalizes the classical LQ/LQG control. Minimizing the Hinf norm yields a controller that is robust (insensitive) to inaccuracies in the mathematical model of the system. Mu-synthesis then extends the Hinf methodology to systems with structured uncertainty. Hence robust control can be viewed as an offspring of the powerful paradigm of optimal control.
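As a small illustration of the system-norm viewpoint, the H2 norm of a stable LTI system can be computed from a Lyapunov equation rather than by frequency-domain integration. The sketch below uses a hypothetical second-order system (the particular A, B, C matrices are just an example, not taken from the course material):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable SISO system: xdot = A x + B w,  z = C x
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # poles at -1 and -2
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability Gramian P solves A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)

# H2 norm = sqrt(trace(C P C^T))
h2 = np.sqrt(np.trace(C @ P @ C.T))
print(h2)  # 1/sqrt(12) ≈ 0.2887 for this system
```

For this transfer function, G(s) = 1/(s^2 + 3s + 2), the table formula ||G||_2^2 = b0^2/(2 a0 a1) = 1/12 confirms the Gramian-based result.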
The presented optimization-based control design can be carried out either offline or online. In the latter case the optimization is performed by invoking a nonlinear programming solver in every sampling period. This is the essence of model predictive control, which will be briefly introduced in this course.
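A minimal sketch of this receding-horizon idea, assuming a hypothetical discrete-time double integrator and a general-purpose solver in place of a dedicated QP code: at every sampling instant a finite-horizon cost is minimized subject to input bounds, and only the first input of the optimal sequence is applied.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical plant: discrete double integrator, Ts = 0.1 s
Ts = 0.1
A = np.array([[1.0, Ts], [0.0, 1.0]])
B = np.array([[0.5 * Ts**2], [Ts]])
Q = np.diag([1.0, 0.1])   # state weight (example values)
R = 0.01                  # input weight
N = 15                    # prediction horizon
u_max = 1.0               # input constraint |u| <= 1

def cost(u_seq, x0):
    """Finite-horizon quadratic cost for a given input sequence."""
    x, J = x0.copy(), 0.0
    for u in u_seq:
        J += x @ Q @ x + R * u**2
        x = A @ x + B.flatten() * u
    return J + x @ Q @ x  # simple terminal penalty

def mpc_step(x0):
    """Solve the horizon problem online; apply only the first input."""
    res = minimize(cost, np.zeros(N), args=(x0,),
                   bounds=[(-u_max, u_max)] * N, method="SLSQP")
    return res.x[0]

# Closed loop: drive the state from [1, 0] toward the origin
x = np.array([1.0, 0.0])
for _ in range(40):
    u = mpc_step(x)
    x = A @ x + B.flatten() * u
print(np.linalg.norm(x))  # should be small after 4 s
```

In practice the quadratic structure of the problem would be exploited by a QP solver; the generic NLP call above is only meant to show what "optimization in every sampling period" means.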
The course also includes methods for time-optimal and suboptimal control, which have proven useful in applications with stringent timing requirements. In addition, semidefinite optimization and linear matrix inequalities will be introduced, as these constitute a very flexible framework for both analysis and numerical computation in robust control. Finally, computational methods for reducing the order of models and controllers will be covered.
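For the model-reduction topic, one standard diagnostic is the set of Hankel singular values, which indicate how many states actually matter for the input-output behavior. A minimal sketch, assuming a hypothetical stable system with one slow and one fast mode:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system: one slow mode (-1) and one fast mode (-100)
A = np.array([[-1.0, 0.0], [0.0, -100.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.1]])

# Controllability and observability Gramians from Lyapunov equations
P = solve_continuous_lyapunov(A, -B @ B.T)       # A P + P A^T + B B^T = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)     # A^T Q + Q A + C^T C = 0

# Hankel singular values: square roots of the eigenvalues of P Q
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]
print(hsv)  # one dominant value; the fast mode contributes almost nothing
```

A large gap between consecutive values (here several orders of magnitude) suggests that truncating the weakly coupled states changes the input-output map only slightly, which is the premise of balanced truncation.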
1. Motivation for optimal and robust control; Introduction to optimization: optimization without and with constraints of equality and inequality types, in finite and infinite-dimensional vector spaces.
2. Optimal control for discrete-time LTI systems; discrete-time LQ-optimal control on a finite time horizon.
3. Discrete-time LQ-optimal control - extension from a finite to an infinite time horizon; discrete-time algebraic Riccati equation (DARE).
4. Model predictive control.
5. Introduction to calculus of variations and its use for formulation and solution of an optimal control problem in continuous time.
6. Application of calculus of variations to solution of the continuous-time LQ-optimal control problem; continuous-time algebraic Riccati equation (CARE).
7. Optimal control with a free final time and with constrained control; Pontryagin principle.
8. Dynamic programming; application to the solution of the LQ-optimal control problem.
9. LQG-optimal control (augmentation of an LQ-optimal state feedback with Kalman filter); robustification of an LQG controller using an LTR method; H2 optimal control as a generalization of LQ/LQG-optimal control.
10. Uncertainty and robustness; analysis of robust stability and robust performance.
11. Analysis of achievable control performance.
12. Design of a robust controller by minimizing the Hinf norm of the system: mixed sensitivity minimization, general Hinf optimal control problem, robust Hinf loopshaping, mu-synthesis.
13. Reduction of the order of the system and the controller.
14. Semidefinite programming and linear matrix inequalities in control design.
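As a small computational companion to items 2-3 above, the infinite-horizon discrete-time LQ gain can be obtained from the stabilizing solution of the DARE. The plant below is a hypothetical discrete double integrator chosen for illustration, not a course assignment:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical plant: discrete double integrator, Ts = 0.1 s
Ts = 0.1
A = np.array([[1.0, Ts], [0.0, 1.0]])
B = np.array([[0.5 * Ts**2], [Ts]])
Q = np.diag([1.0, 0.1])   # example state weight
R = np.array([[0.01]])    # example input weight

# Stabilizing solution of the discrete-time algebraic Riccati equation
P = solve_discrete_are(A, B, Q, R)

# Infinite-horizon LQ gain: u = -K x,  K = (R + B^T P B)^{-1} B^T P A
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop poles must lie strictly inside the unit circle
poles = np.linalg.eigvals(A - B @ K)
print(np.abs(poles))
```

The same `scipy.linalg` module offers `solve_continuous_are` for the CARE of items 6-7, so the discrete and continuous exercises can share one workflow.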
Some exercises (mainly those at the beginning of the semester) will be devoted to solving computational problems together with the instructor and other students. The remaining exercises will be used by the students to work on their assigned (laboratory) projects.