S.Skogestad and I.Postlethwaite, "Multivariable feedback control", Wiley, 1996.

Checklist about the chapters

Read this after you have gone through a chapter, to see that you have understood and covered the main points.


First a comment about the frequency domain and time domain. This book covers both, with an emphasis on the frequency domain and Laplace transforms because of the invaluable insight this gives into feedback control. I often get the question: what is the relationship between the frequency and time domain? For example, "if a system has a better (smaller) peak for a closed-loop transfer function (like mu for Robust Performance), does it necessarily have a better time response (for instance, settling time and overshoot)?" The scientific answer in terms of exact results is no, but for most practical purposes the answer is yes.

The reason for the ambiguity is that the relationship between the frequency and time domain is through integral transforms. For example, look at the definition of the Laplace transform in (4.12) which with s=jw is the Fourier transform (frequency response). Thus, there is not generally a direct relationship between specific properties of the frequency response (like the peak or norm) and a specific time response (e.g. a step response).

However, there are a few more solid relationships. For example, see Section 2.4.4 for the relationship between the time-domain oscillations in the setpoint step response, expressed by the Total Variation (TV), and the peak MT of T. Some norm relationships between the two domains are given in Tables A.1 and A.2.

Finally, note that the frequency response may actually be interpreted as the time response to persistent sinusoidal changes. This is exact! For example, see equations (2.1) and (2.2).
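
As a small check of this interpretation (the first-order example G(s) = 1/(s+1) and the excitation frequency below are my own choices, not from the book), the following sketch compares a simulated response to a persistent sinusoid with the prediction |G(jw)| sin(wt + arg G(jw)):

import numpy as np

w = 2.0                               # excitation frequency [rad/s], chosen arbitrarily
Gjw = 1.0 / (1j * w + 1.0)            # frequency response of G(s) = 1/(s+1) at s = jw

# crude Euler simulation of dy/dt = -y + u with u(t) = sin(w t)
dt = 1e-4
t = np.arange(0.0, 30.0, dt)
y = np.zeros_like(t)
for k in range(len(t) - 1):
    y[k + 1] = y[k] + dt * (-y[k] + np.sin(w * t[k]))

y_pred = np.abs(Gjw) * np.sin(w * t + np.angle(Gjw))   # predicted steady-state response
print(np.max(np.abs(y[-5000:] - y_pred[-5000:])))      # small once the transient has died out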

Chapter 1. INTRODUCTION

1.1: This book covers more than the traditional controller design and control system analysis (steps 9-10).

1.2 Note defs of NS,NP,RS and RP

1.3 Note definition of proper, strictly proper, etc.

1.4 SCALING!! Very IMPORTANT in applications, both for model analysis (input-output controllability) and for controller design. Main idea: Variables are dimensionless with 1 as maximum value.

With scaling you make an initial decision regarding performance. This makes weight selection simple later (you may often select identity weights if the initial scaling is reasonable!)
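
A minimal sketch of this idea (all numbers below are made up for illustration, and only steady-state gains are scaled):

import numpy as np

Ghat  = np.array([[10.0, 4.0],
                  [ 2.0, 8.0]])      # unscaled plant gain (hypothetical units)
Gdhat = np.array([[3.0], [1.0]])     # unscaled disturbance gain (hypothetical)

Du = np.diag([0.5, 0.5])   # largest allowed input moves (u_hat_max)
Dd = np.diag([2.0])        # largest expected disturbance (d_hat_max)
De = np.diag([1.0, 2.0])   # largest allowed control errors (e_hat_max)

G  = np.linalg.inv(De) @ Ghat  @ Du   # scaled plant:       G  = De^-1 Ghat Du
Gd = np.linalg.inv(De) @ Gdhat @ Dd   # scaled disturbance: Gd = De^-1 Gdhat Dd
print(G)
print(Gd)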

1.5 Linear models. Everything in this book is based on linear models, which are often obtained by numerical differentiation of a first-principles model. This is illustrated by an example (room heating).
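
A sketch of the numerical-differentiation step (the model f below is a made-up scalar heat-balance-like example, not the book's room-heating model):

import numpy as np

def f(x, u):
    # made-up nonlinear first-principles model, dx/dt = f(x, u)
    return np.array([-0.1 * x[0] ** 1.5 + 0.5 * u[0]])

x0 = np.array([4.0])                     # chosen operating point
u0 = np.array([0.1 * 4.0 ** 1.5 / 0.5])  # input giving steady state at x0
eps = 1e-6                               # perturbation size
A = np.column_stack([(f(x0 + eps * e, u0) - f(x0, u0)) / eps for e in np.eye(len(x0))])
B = np.column_stack([(f(x0, u0 + eps * e) - f(x0, u0)) / eps for e in np.eye(len(u0))])
print(A, B)    # linearized model dx/dt = A x + B u (in deviation variables)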

1.6 Notation. Most important notation: G,K,L,S and T=I-S.

Alternative implementations of two degrees-of-freedom controller: see Fig. 4.5.

General control configuration: P and K (we will get back to this in Chapter 3); see for example Fig. 3.13, which shows a one-degree-of-freedom control system in the PK-structure. P is the "generalized" plant.

Generalized control system: Let all signals affecting the system (which we cannot do anything about) be collected in w (disturbances, noise, reference changes). Let the signals we want to be small be collected in z (typically control errors and manipulated inputs). The control objective is then: based on the measurements and other signal information in v (including references), use a controller (algorithm) K to generate the manipulated input signals u, such that the norm from w to z is small (z remains small in spite of variations in w).

Note: A reference r can be included in both w (exogenous signals) and v (sensed signals), and indirectly also in z (e.g. as y-r).
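
As a hedged illustration (following the one-degree-of-freedom setup of Fig. 3.13; check the book for the exact definitions used there), one consistent grouping of the signals is:

    w = (d, r, n)               exogenous inputs: disturbance, reference, measurement noise
    z = y - r                   signal to be kept small (the control error)
    v = r - y_m = r - (y + n)   information available to the controller
    u = K v                     manipulated input generated by the controller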

Question to the reader: What other signals can be included in several places?

Chapter 2. CLASSICAL FEEDBACK CONTROL

This chapter is mostly intended for individual reading. It is a review of classical feedback control, except for the section on weighted sensitivity, which should be read carefully. Most of the same ideas are used for MIMO systems later.

The main idea is to make the reader familiar with ideas, in particular from the frequency domain, which are crucial for understanding the behavior of MIMO systems.

Note that the "frequency range corresponding to the closed-loop bandwidth wB" is generally the most important. People with experience from the time domain only may wonder what value wB has. Essentially, if tau_cl is the closed-loop time constant, then wB = 1/tau_cl (approximately). Here the "closed-loop time constant" tau_cl is approximately the time it takes from when the system encounters an upset (disturbance, reference change) until the output has covered 63% of its way back to, or towards, its (new) steady-state value.
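
A rough numerical check of this rule of thumb (the first-order closed loop below is an assumed example, and wB = 1/tau_cl is only an approximation):

import numpy as np

tau = 5.0                                   # assumed closed-loop time constant
w = np.logspace(-3, 2, 2000)
S = (tau * 1j * w) / (tau * 1j * w + 1.0)   # S for the first-order closed loop T = 1/(tau*s + 1)
wB = w[np.argmax(np.abs(S) >= 1.0 / np.sqrt(2))]   # bandwidth: first frequency where |S| reaches 0.707
print(wB, 1.0 / tau)                        # approximately equal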

Some points worth noting:

2.1 Non-minimum phase systems are systems with time delays or RHP-zeros (inverse response for SISO systems).

2.2 Feedback control. Note again the definition of L, S and T and how they are related to the closed-loop response.

2.4 Evaluating closed-loop performance. The time domain specifications in Fig. 2.9 are not used later in the book, mainly because of difficulties in formulating easily solvable mathematical problems.

Note the maximum peaks of S and T, denoted MS and MT - these are the H-infinity norms of S and T!

Note definitions of bandwidth (wB in terms of S) and gain crossover frequency (wc in terms of L).
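
A minimal numerical sketch (the loop transfer function below is an assumed example, not from the book) of how these quantities can be read off from a frequency grid:

import numpy as np

w = np.logspace(-2, 2, 4000)
s = 1j * w
L = 2.0 / (s * (s + 1.0))     # assumed loop transfer function
S = 1.0 / (1.0 + L)
T = L / (1.0 + L)
MS, MT = np.max(np.abs(S)), np.max(np.abs(T))       # peaks = H-infinity norms of S and T
wB = w[np.argmax(np.abs(S) >= 1.0 / np.sqrt(2))]    # bandwidth in terms of S
wc = w[np.argmin(np.abs(np.abs(L) - 1.0))]          # gain crossover frequency, |L(jwc)| = 1
print(MS, MT, wB, wc)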

2.6 Loop shaping. This is the classical trial-and-error approach; at the end of the section we show how the phase-correction step can be "automated" by Glover-McFarlane H-infinity loop shaping.

Note that for disturbance rejection we typically want |L| = |Gd| if we want to avoid using excessive inputs. For disturbances at the plant input, Gd = G, this means that we want a controller which is a constant gain, |K| = 1 (no dynamics are required for performance, but we may need them for robustness, and may also want some integral action at low frequencies).
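
A hedged sketch of this argument (G and Gd below are assumed examples, not the book's disturbance process):

import numpy as np

w = np.logspace(-3, 2, 500)
s = 1j * w
G  = 20.0 / ((10.0 * s + 1.0) * (0.1 * s + 1.0))   # assumed plant
Gd = 10.0 / (10.0 * s + 1.0)                       # assumed disturbance model
K_needed = np.abs(Gd) / np.abs(G)   # |K| giving |L| = |G K| = |Gd| at each frequency
print(K_needed[0], K_needed[-1])    # about 0.5 at low frequencies, larger at high frequencies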

2.7 Shaping closed-loop transfer functions. A reasonable design specification is weighted sensitivity, p1 = |wP S| < 1. To avoid excessive inputs we also want p2 = |wu K S| to be small. We combine these into sqrt(p1^2 + p2^2), which should be minimized; see (2.78). This is equivalent to a mixed sensitivity (S/KS) problem, as discussed in Exercise 2.11.

*** Important *** Make sure you understand the meaning of the performance weight wP.

Weight which requires a slope of -1 for |L| at lower frequencies:

    wP(s) = (s/M + wB) / (s + wB*A)

    M  = maximum allowed peak of |S|
    wB = approximately the required bandwidth (where the bound 1/|wP| on |S| crosses 1)
    A  = maximum allowed steady-state offset
Study Fig. 2.26 and 2.27 carefully !!!!
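
As a hedged illustration (M, wB and A below are arbitrary choices), the following evaluates this weight and the corresponding upper bound 1/|wP| on |S|:

import numpy as np

M, wB, A = 2.0, 1.0, 1e-4          # arbitrary choices for illustration
w = np.logspace(-6, 2, 1000)
s = 1j * w
wP = (s / M + wB) / (s + wB * A)
bound = 1.0 / np.abs(wP)           # |S(jw)| is required to stay below this curve
print(bound[0], bound[-1])         # close to A at low frequencies, close to M at high frequencies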

****** Homework:

Use H-infinity loopshaping and H-infinity S/KS to design controllers for the disturbance process.

More precisely: do Exercise 2.3 (p. 33) AND redo Example 2.11 (also see the MATLAB files).

This will: 1. Give a good introduction to the software (MATLAB with the mu-toolbox and the robust control toolbox). 2. Introduce you to design methods which, by introducing the maximum singular value, can be extended directly to MIMO systems.

Chapter 3. INTRODUCTION TO MULTIVARIABLE CONTROL

This chapter provides an introduction to MIMO systems and MIMO feedback control. It may form the basis for a first course on multivariable control.

Chapter 4. ELEMENTS OF LINEAR SYSTEM THEORY

There is a lot of important material in this chapter (although some of the details are of less importance).

Chapter 5. LIMITATIONS ON PERFORMANCE IN SISO SYSTEMS

Chapters 5 and 6 discuss inherent control limitations imposed by the plant G(s) which cannot be overcome by any controller. Try not to get swamped with all the details in this chapter. Look quickly through the material and try to understand the significance of the controllability rules.

Some key points:

Chapter 6. LIMITATIONS ON PERFORMANCE IN MIMO SYSTEMS

Note the similarity with Chapter 5. This is important material, but again try not to get lost in the details. Try to make a table of the main results in the two chapters - also to highlight the differences between MIMO and SISO. It is recommended that each student makes such a table (you probably learn the most from doing it yourself).

Here are some additional points to note:

For some good exercises to test your understanding of the material in this chapter, check out Problem 1 in the Trondheim-Exam from 1996 (and its solution).

Chapter 7. UNCERTAINTY AND ROBUSTNESS FOR SISO SYSTEMS

The main points are
   1. Why use frequency-domain uncertainty (neglected dynamics, etc.)
   2. Additive uncertainty in Nyquist plot (frequency-domain)
   3. Multiplicative uncertainty. Typical shape of weight w_I
   4. Graphical derivation of RS-condition, |w_I T|<1.
   5. Graphical derivation of RP-condition, |w_IT| + |w_P S| < 1.
In the MIMO chapter we rederive this using the M-Delta structure and the structured singular value (mu). (A small numerical check of conditions 4 and 5 is sketched below.)
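
A hedged numerical check of conditions 4 and 5 (the plant, controller and weights below are assumed examples, not from the book):

import numpy as np

w = np.logspace(-3, 3, 4000)
s = 1j * w
G = 1.0 / (s + 1.0)                    # assumed plant
K = 2.0 + 1.0 / s                      # assumed PI controller
L = G * K
S, T = 1.0 / (1.0 + L), L / (1.0 + L)
wI = (s + 0.2) / (0.5 * s + 1.0)       # assumed uncertainty weight: 20% at low freq, 200% at high freq
wP = (s / 2.0 + 0.05) / (s + 0.05e-4)  # assumed performance weight (M=2, wB=0.05, A=1e-4)
print("RS peak:", np.max(np.abs(wI * T)))                   # RS iff this peak < 1
print("RP peak:", np.max(np.abs(wI * T) + np.abs(wP * S)))  # RP iff this peak < 1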

Do some of the Exercises on pages 249-251 !!

Chapter 8. ROBUST STABILITY AND PERFORMANCE ANALYSIS

Some main points:
  1. How to derive M: Read Example 8.4 on p. 301 carefully.
  2. How to test RS using the M-Delta structure (Sec. 8.1 and 8.5).
  3. For a convex set of perturbations (Delta's) (Sec. 8.5):
      a. For real/complex Delta: 
            RS iff det(I - M Delta) neq 0, forall Delta, forall w
      b. For complex Delta only:      
            RS iff rho(M Delta) < 1, forall Delta, forall w
  4. Delta full matrix (Sec. 8.6): 
       SISO:  RS iff |M|<1 forall w       (iff means "if and only if")
       MIMO:  RS iff smax(M) < 1 forall w
         (Sufficiency is obvious from small gain theorem,
          necessity follows since any phase in Delta is allowed and any
          direction in Delta is allowed).
  5. Delta block diagonal (sec. 8.9): 
       By definition of mu: RS iff mu(M)<1
       Here mu(M) is defined as the inverse of the smallest smax(Delta)
       which makes det(I - M Delta) = 0.
  6. The upper bound smax(D M D^-1) on mu(M) is derived (Sec. 8.7) by
         i. EITHER: noting that rho(M Delta) = rho(D M D^-1 Delta) when
               D is such that D Delta D^-1 = Delta
         ii. OR: using the idea of scaling in the M Delta block diagram
             as in the book on page 310.
     (A small numerical sketch of this upper bound is given after this list.)
  7. Sec. 8.8: Describes computation of mu and its bounds; rho and sigma
  8. Sec. 8.10: H-infinity robust performance (RP) is a special case of 
       RS. See  "block-diagram" proof on page 324.
  9. Summary of tests for NS, NP, RS and RP : see 8.10.2 on page 325 (!!)
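
A crude numerical sketch of the upper bound in point 6 (M below is an arbitrary complex example with Delta = diag(delta1, delta2)); the scan only uses scalings D = diag(d, 1), which commute with Delta:

import numpy as np

M = np.array([[1.0 + 0.5j, 2.0],
              [0.2j,       0.5 - 1.0j]])    # arbitrary example, not from the book

smax = lambda A: np.linalg.svd(A, compute_uv=False)[0]
d_grid = np.logspace(-3, 3, 2001)
bounds = [smax(np.diag([d, 1.0]) @ M @ np.diag([1.0 / d, 1.0])) for d in d_grid]
print("smax(M)             :", smax(M))       # upper bound with D = I
print("min_D smax(D M D^-1):", min(bounds))   # improved upper bound on mu(M)
print("rho(M)              :", np.max(np.abs(np.linalg.eigvals(M))))   # lower bound, rho(M) <= mu(M)
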
Application (Section 8.11). It is shown how to:
1. Derive P, N and M for the case with input uncertainty and
   performance in terms of weighted sensitivity (see (8.29) and (8.123)).

2. Derive requirements for NS, NP (an H-infinity norm condition, i.e. with a full DeltaP block),
   RS and RP (see (8.115)-(8.118)).

3. Apply these requirements to the SISO case and rederive the results 
   from Chapter 7 (see (8.124)-(8.127)).

Note that there are two main approaches to get a robust design:

  1. Make it robust to some general class of uncertainty. For SISO systems one may consider GM and PM or the peak of S or T. For MIMO systems the normalized coprime uncertainty provides a good general class, and the corresponding Glover-McFarlane H-infinity loopshaping design has proved useful in many applications.
  2. Model the uncertainty in detail and make the system robust with respect to it. This may require a large effort both in terms of uncertainty modelling (in particular for parametric uncertainty) and in terms of analysis and design (which involves the structured singular value).
In practice, one will therefore often start with approach 1, and switch to approach 2 if the resulting control performance is not satisfactory.

APPENDIX A: MATRIX THEORY AND NORMS

Appendix A gives an overview of matrix theory and norms.

Note in particular the eigenvalue decomposition:

A = T Lambda T^-1 (T contains the eigenvectors, Lambda the eigenvalues)

and the (for us more useful) singular value decomposition (SVD)

A = U Sigma V^H (U and V contain the output and input singular vectors)

The first (maximum) element in the diagonal matrix Sigma is the maximum singular value, which is equal to the induced 2-norm of the matrix A. Note that the columns in U and V are orthonormal, but that this is not the case for the eigenvectors in T.
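
A quick numpy illustration of these facts (A is an arbitrary example matrix):

import numpy as np

A = np.array([[5.0, 4.0],
              [3.0, 2.0]])              # arbitrary example
U, svals, Vh = np.linalg.svd(A)
print(svals[0], np.linalg.norm(A, 2))   # maximum singular value = induced 2-norm
print(U.T @ U)                          # columns of U are orthonormal (numerically the identity)
print(Vh @ Vh.T)                        # columns of V are orthonormal as well
T = np.linalg.eig(A)[1]                 # eigenvector matrix of A = T Lambda T^-1
print(T.T @ T)                          # generally NOT the identity: eigenvectors need not be orthogonal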

Norms are used to generalize the concept of gain, and are extensively discussed in the Appendix.

The Appendix also lists the properties of the RGA-matrix

Lambda(A) = RGA(A) = A x A^-T (x denotes element-by-element multiplication)

Note that Lambda is here used for the RGA-matrix (and NOT the eigenvalue matrix). In the book Lambda (as a matrix) always means the RGA-matrix, lambda_ij (a scalar with two indices) refers to the ij'th element of the RGA-matrix, whereas lambda_i (a scalar with one index) refers to the i'th eigenvalue.
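
A minimal numpy sketch of this formula (A is an arbitrary nonsingular example):

import numpy as np

A = np.array([[10.0, 4.0],
              [ 2.0, 8.0]])          # arbitrary nonsingular example
RGA = A * np.linalg.inv(A).T         # '*' is element-by-element multiplication
print(RGA)
print(RGA.sum(axis=0), RGA.sum(axis=1))   # each row and column of the RGA sums to 1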