#### S.Skogestad and I.Postlethwaite, "Multivariable feedback control", Wiley, 1996.

Last updated 19 Mar 1997

Read this after you have gone through a chapter, to see that you have understood and covered the main points.

### Chapter 1. INTRODUCTION

1.1: This book covers more than the traditional controller design and control system analysis (steps 9-10).

1.2 Note the definitions of NS, NP, RS and RP (nominal stability, nominal performance, robust stability and robust performance).

1.3 Note definition of proper, strictly proper, etc.

1.4 SCALING!! Very IMPORTANT in applications, both for model analysis (input-output controllability) and for controller design. Main idea: Variables are made dimensionless, with 1 as the maximum value.

With scaling you make an initial decision regarding performance. This makes weight selection simple later (you may often select identity weights if the initial scaling is reasonable!)
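As an illustration (not from the book), the scaling can be done numerically by pre- and post-multiplying the unscaled model with diagonal matrices of the maximum allowed/expected magnitudes; a NumPy sketch with made-up numbers:

```python
import numpy as np

# Unscaled plant gain matrix (hypothetical numbers, not from the book)
G_unscaled = np.array([[0.5, 2.0],
                       [1.0, 3.0]])

e_max = np.array([10.0, 5.0])   # max allowed control error per output: De = diag(e_max)
u_max = np.array([2.0, 4.0])    # max available input per channel:      Du = diag(u_max)

De = np.diag(e_max)
Du = np.diag(u_max)

# Scaled model: G = De^-1 * G' * Du, so that |u| <= 1 and |e| <= 1 in scaled units
G = np.linalg.inv(De) @ G_unscaled @ Du
print(G)
```

With this convention a scaled gain of 1 means "the largest allowed input just produces the largest allowed error", which is what makes identity weights a sensible default.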

1.5 Linear models. Everything in this book is based on linear models, which are often obtained by numerical differentiation (linearization) of a first-principles model. This is illustrated by an example (room heating).

1.6 Notation. Most important notation: G,K,L,S and T=I-S.

Alternative implementations of two degrees-of-freedom controller: see Fig. 4.5.

General control configuration: P and K (we will get back to it in Chapter 3); see for example Fig. 3.13, which shows a one degree-of-freedom control system in the PK-structure. P is the "generalized" plant.

Generalized control system: Let all signals affecting the system (which we cannot do anything about) be collected in w (disturbances, noise, reference changes). Let the signals we want to be small be collected in z (typically control errors and manipulated inputs). The control objective is then: based on measurements and other signal information in v (including references), use a controller (algorithm) K to generate manipulated input signals u, such that the norm from w to z is small (z remains small in spite of variations in w).

Note: A reference r can be included in both w (exogenous signals) and v (sensed signals) and indirectly also in z (e.g. as y-r).

Question to the reader: What other signals can be included several places?

### Chapter 2. CLASSICAL FEEDBACK CONTROL

This chapter is mostly intended for individual reading. It is a review of classical feedback control, except for the section on weighted sensitivity, which should be read carefully. Most of the same ideas are used for MIMO systems later.

The main idea is to make the reader familiar with ideas, in particular from the frequency domain, which are crucial for understanding the behavior of MIMO systems.

Note that the "frequency range corresponding to the closed-loop bandwidth wB" generally is the most important. People with experience from the time domain only may wonder what value wB has. Essentially, if tau_cl is the closed-loop time constant, then wB = 1/tau_cl (approximately). Here the "closed-loop time constant" tau_cl is approximately the time it takes from when the system encounters an upset (disturbance, reference change) until the output has covered 63% of its way back to, or towards, its (new) steady-state value.
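A quick numerical check of the wB = 1/tau_cl rule of thumb, here for an integrator loop L(s) = k/s (so T(s) = k/(s+k) and tau_cl = 1/k exactly); a Python sketch, not from the book:

```python
k = 2.0                    # loop L(s) = k/s gives T(s) = k/(s+k)
tau_cl = 1.0 / k           # closed-loop time constant of T
wB = 1.0 / tau_cl          # claimed bandwidth

# |S(jw)| = |jw / (jw + k)|; at w = wB it equals 1/sqrt(2) = 0.707
S_mag = abs(1j*wB / (1j*wB + k))
print(S_mag)
```

For this first-order case the rule holds exactly: the frequency where |S| reaches 0.707 coincides with 1/tau_cl.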

Some points worth noting:

2.1 Non-minimum phase systems are systems with time delays or RHP-zeros (inverse response for SISO systems).

2.2 Feedback control. Note again the definition of L, S and T and how they are related to the closed-loop response.

2.4 Evaluating closed-loop performance. The time domain specifications in Fig. 2.9 are not used later in the book, mainly because of difficulties in formulating easily solvable mathematical problems.

Note the maximum peaks of S and T, denoted Ms and Mt - these are the H-infinity norms!

Note definitions of bandwidth (wB in terms of S) and gain crossover frequency (wc in terms of L).

2.6 Loop shaping. This is the classical trial-and-error approach, where at the end of the section we show how the phase correction step can be "automated" by Glover-McFarlane H-infinity loopshaping.

Note that for disturbance rejection we typically want |L| = |Gd| if we want to avoid using excessive inputs. For disturbances at the plant input, Gd = G, this means that we want a controller which is a constant gain, |K| = 1 (no dynamics are required for performance, but they may be needed for robustness, and we may also want some integral action at low frequencies).

2.7 Shaping closed-loop transfer functions. A reasonable design specification is weighted sensitivity, p1 = |wP S| < 1. To avoid excessive inputs we also want p2 = |w_u KS| small. We combine these into sqrt(p1^2 + p2^2), see (2.78), which should be minimized. This is equivalent to a mixed sensitivity (S/KS) problem as discussed in Exercise 2.11.

*** Important *** Make sure you understand the meaning of the performance weight wp.

Weight which requires a slope of -1 for |L| at low frequencies:

```
wp(s) = (s/M + wB) / (s + wB*A)

M  = maximum peak allowed on |S|
wB = frequency where the magnitude of S crosses 1
A  = steady-state offset allowed (A close to 0 forces integral action)
```
Study Fig. 2.26 and 2.27 carefully !!!!
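To see what the weight demands, evaluate |wp| at low, crossover and high frequencies; a short sketch with hypothetical design values (M=2, A=1e-4, wB=1):

```python
M, A, wB = 2.0, 1e-4, 1.0               # hypothetical design values
wp = lambda s: (s/M + wB) / (s + wB*A)  # standard first-order performance weight

low  = abs(wp(1j*1e-8))   # -> 1/A: tight control (near-integral action) at low freq
high = abs(wp(1j*1e8))    # -> 1/M: allows |S| to peak up to M at high freq
mid  = abs(wp(1j*wB))     # ~ 1:   |S| must cross 1 near wB
print(low, high, mid)
```

So requiring |wp S| < 1 forces |S| < A at steady state, |S| < M everywhere, and bandwidth roughly wB.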

****** Homework:

Use H-infinity loopshaping and H-infinity S/KS to design controllers for the disturbance process.

More precisely: Do Exercise 2.3 (p. 33) AND redo Example 2.11 (also see the MATLAB files).

This will: 1. Give a good introduction to the software (MATLAB and mu-toolbox and robust control toolbox). 2. Introduce you to design methods which by introducing the max. singular value can be extended directly to MIMO systems.

### Chapter 3. INTRODUCTION TO MULTIVARIABLE CONTROL

This chapter provides an introduction to MIMO systems and MIMO feedback control. It may form the basis for a first course on multivariable control.
• #### 3.1 Introduction

A plant G is INTERACTIVE if its off-diagonal elements are nonzero. It is one-way interactive if it can be rearranged to an upper or lower triangular matrix.

For SISO systems the GAIN |G d|/|d| = |G| is independent of the input magnitude |d|.

Also for MIMO systems the GAIN ||G d|| / ||d|| (where ||.|| is some norm) is independent of the magnitude ||d||, but it DOES depend on the input direction, see Section 3.3.1.

A plant is said to be ILL-CONDITIONED if the gain depends strongly on the input direction. It is quantified by the condition number gamma(G) (which is much larger than 1 for an ill-conditioned plant).
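The directionality is easy to see numerically with the distillation gain matrix used later in Section 3.3; a NumPy sketch:

```python
import numpy as np

# Steady-state distillation gain matrix from Section 3.3 of the book
G0 = np.array([[ 87.8,  -86.4],
               [108.2, -109.6]])

d1 = np.array([1.0,  1.0]) / np.sqrt(2)   # low-gain input direction
d2 = np.array([1.0, -1.0]) / np.sqrt(2)   # high-gain input direction

g1 = np.linalg.norm(G0 @ d1)              # small gain (~1.4)
g2 = np.linalg.norm(G0 @ d2)              # large gain (~197)

s_vals = np.linalg.svd(G0, compute_uv=False)
gamma = s_vals[0] / s_vals[-1]            # condition number
print(g1, g2, gamma)                      # gamma >> 1: ill-conditioned
```

Both inputs have unit magnitude, yet the output magnitudes differ by more than a factor of 100: that is what "ill-conditioned" means.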

• #### 3.2 Transfer functions

For MIMO systems the order of the transfer functions matter, so in general GK is not the same as KG (even when G and K are square matrices).

The main rule for evaluating transfer functions is: Start from the output and write down the transfer functions as you meet them going towards the input. If you exit a feedback loop, you get a term (I-L)^-1, where L is the transfer function around the loop (gain going backwards).

Note that the order (left to right) in the block diagram and the formulas is reversed!

The rules etc. given in this section are important and should be memorized!

• #### 3.3 MIMO frequency response

Essentially the same as for SISO systems except for the issue of directions.

Make sure you understand the singular value decomposition.

```   G = U Sigma V^H
V: columns give input directions
U: columns give output directions
Sigma: Gains relating these directions
```
The largest singular value (smax) is the induced 2-norm.

Condition number (gamma): max. singular value / min. singular value

For a square matrix, gamma = smax(G) smax(G^-1), so the condition number is large if both G and G^-1 have large elements.

Note that the condition number depends strongly on scaling. It may be scaled (minimized) at the input or output or both.
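A two-line numerical illustration of the scaling dependence (hypothetical numbers):

```python
import numpy as np

# The condition number changes dramatically with (diagonal) scaling
G = np.array([[1.0, 0.0],
              [0.0, 1e-3]])
gamma = np.linalg.cond(G)                  # 1000: looks ill-conditioned

D_out = np.diag([1.0, 1e3])                # rescale the second output
gamma_scaled = np.linalg.cond(D_out @ G)   # 1: perfectly conditioned
print(gamma, gamma_scaled)
```

This is why the minimized (scaled) condition number is the more reliable indicator.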

Relative Gain Array (RGA) for square G:

`        RGA = Lambda(G) = G x G^-T   `
Useful measure - easy to compute. The difference between RGA and I (the identity matrix) says something about two-way interactions. But the RGA also has many other uses! Note that our definition says nothing about decentralized control (although the RGA is useful for decentralized control).

We have that || RGA ||_sum is approximately equal to the minimized condition number.

****** Example MATLAB (Distillation process):

```
G0 = [87.8 -86.4; 108.2 -109.6];
[U,sval,V]=svd(G0)           % or use: [U,sval,V]=vsvd(G0) for G freq.-varying
rga = G0.*pinv(G0.')         % or use: rga = vrga(G0) - see function vrga
rgasum = sum(sum(abs(rga)))  % sum-norm of RGA (sum of elements)
gamma = cond(G0)             % or use: gamma = vcond(G) for G freq.-varying
gammamin = condmin(G0)       % see function condmin
gammamini = condmini(G0)     % see function condmini
gammamino = condmino(G0)     % see function condmino
```
• #### 3.4 Control of MIMO plants

A few "simple-minded" approaches to MIMO control:
```
1. Decentralized control; K is diagonal

   o Does nothing about interaction - poor performance (even nominally)
   o But normally robust
   o May often be tuned on-line (little modelling effort)
   o Important issue: Select the pairing (to achieve a diagonally dominant G)

2. Compensator approaches and decoupling

   o First make something simple which takes care of the interactions
   o Then design a diagonal controller for performance etc.
   o Decoupler: E.g. an inverse-based controller
        K = l G^{-1}
     Often sensitive to uncertainty. Also: Decoupling may not be
     required, e.g. for disturbance rejection.
   o SVD-controller: More flexible. Usually based on the SVD of G
     around the crossover frequency.
```
• #### 3.5 Introduction to MIMO RHP-zeros

This section is just an introduction so it is not necessary to read it in detail.

Square MIMO plant: Poles and zeros are in most cases poles and zeros of det G(s).

NOTE: Poles are essentially poles of elements of G, but generally the zeros are NOT the zeros of elements of G !!

Right half plane (RHP) zero: Fundamental limitation on achievable control (as for SISO). Why? G(s)^-1 is unstable.

MIMO pole and zeros have directions.

```
Poles: Look at the SVD of G(p). The pole direction is the direction in
       which the gain of G(p) is infinite.
Zeros: Look at the SVD of G(z). The zero direction is the direction in
       which the gain of G(z) is zero (i.e. it goes to zero for s=z).
Numerical calculation: Use the state-space form, see Chapter 4.
```
See 2x2 example with H-infinity design: Note that we can MOVE effect of RHP-zero to particular outputs (unless zero is pinned). Try it yourself with the MATLAB-file Expl3_8.m !
• #### 3.6 Condition number and RGA

These are very useful measures. Study this carefully. Also make sure you have read through most of the Appendix at this point.
• #### 3.7 Introduction to robustness for MIMO plants

This section is just an introduction, so it is not necessary to read it in detail.

##### Example 1: Spinning satellite
Usual stability margins (GM, PM) one loop at a time: Looks OK, BUT the singular values of S and T have a peak around 10. This ALWAYS signals a robustness problem (and performance is also poor).

See Matlab file Sec3_7_1.m

• ##### Example 2: Distillation process
Here the usual margins are OK + there is NO large peak on S and T. Still poor robustness with input uncertainty, see simulation. What's wrong? The input uncertainty moves large input signals over to the direction where the plant has large gain.

NOTE: We use an inverse-based controller. This problem CANNOT happen with a diagonal controller with no peak on S or T.

See Matlab file Sec3_7_2.m

• #### 3.8 General control problem formulation

Important section!

Note that we can handle non-square plants, we need not measure all the outputs, etc. etc.

Note that the general formulation (P) has already been used in the files for H-infinity design (S/KS).

May use sysic in the MATLAB mu-toolbox to generate P.

Do Exercise 3.10 carefully (p. 104)

Also do the following: Exercise 3.15, 3.16, 3.17, 3.18, 3.19, 3.20, 3.21, 3.23, 3.24

### Chapter 4. ELEMENTS OF LINEAR SYSTEM THEORY

There is a lot of important material in this chapter (although some of the details are of less importance).
• #### 4.1 System descriptions

For linear systems there are several alternative system representations. The state-space representation (4.1.1) often follows directly from a physical model, and is used in most numerical calculations. The transfer function representation (4.1.3) is a nice compact representation which yields invaluable insights; it allows for series connections to be represented by multiplication of transfer functions. It also leads directly to the frequency response (4.1.4). You do not need to remember the details about the coprime factorization (4.1.5), but you should know that it is a factorization into two stable systems, and that it is useful for representing the class of all stabilizing controllers, and it forms the basis for the very useful coprime uncertainty description.

• #### 4.2 State controllability and state observability

These are important system theoretical concepts which you should know the meaning of. Note that modes which are not state controllable and/or state observable disappear in a minimal realization of the system (a representation with the fewest possible number of states).

There are many ways to check for state controllability and observability, e.g. with Gramians etc. The method which yields the most insight is probably to compute the input and output directions associated with each pole (mode):

Let t and q be the right and the left eigenvectors of the state matrix A.

```
A t = p t,   q^H A = p q^H
```
then the pole output and input directions are
```
y_p = C t,   u_p = B^H q                 (4.69)
```
The mode (pole) p is not state observable if and only if (iff) y_p = 0, and the mode p is not state controllable iff u_p = 0. In other words, a system is state observable iff y_p is nonzero for all p, and a system is state controllable iff u_p is nonzero for all p.

In MATLAB we use the following commands:

```   [T,P] = eig(A)
YP = C*T
[Q,P] = eig(A')
UP = B'*Q
```
Then the columns in YP are the output pole directions, and the columns in UP are the input pole directions. As an example consider the following SISO system with two states:
```
A = [-2 -2; 0 -4]; B = [1 ; 1]; C = [1 0];
```
which has poles (eigenvalues) at -2 and -4. Using the commands above we get
```
YP = [1  0.71],   UP = [0  1]
```
(make sure the order of the columns in YP and UP is consistent with the order of the poles in P).
Thus, from YP both modes are state observable from the output, but from UP only the second mode (the state at -4) is state controllable from the input. A minimal realization of the system thus has only one state, as is easily verified by computing the transfer function
`    G(s) = C (sI-A)^-1 B = 1/(s+4). `
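The same computation in Python/NumPy, for readers without MATLAB (my translation, not from the book's files):

```python
import numpy as np

A = np.array([[-2.0, -2.0],
              [ 0.0, -4.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

p, T = np.linalg.eig(A)              # right eigenvectors of A
YP = C @ T                           # output pole directions
q, Q = np.linalg.eig(A.conj().T)     # left eigenvectors of A
UP = B.conj().T @ Q                  # input pole directions

# collect |y_p| and |u_p| per pole (eig may order the poles differently)
yp = {int(round(pi.real)): abs(YP[0, i]) for i, pi in enumerate(p)}
up = {int(round(qi.real)): abs(UP[0, i]) for i, qi in enumerate(q)}
print(yp, up)   # mode -2 has u_p = 0: not state controllable
```

The dictionaries make the pole-to-column matching explicit, which avoids the ordering pitfall mentioned above.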

Note Example 4.5 on state controllability of "bath tubs" in series. This shows that a system may be "state controllable", but still not be "controllable" in any practical sense. In the bath tub example, we have only one input (the inlet temperature T0), but we want to control four outputs (T1, T2, T3, T4). Independent control of 4 outputs using 1 input is clearly not possible (the plant is not "functional controllable", see section 6.3, p. 218).

• #### 4.4 Poles

The poles of G(s) are essentially the union of the poles of the elements of the transfer function matrix, but to get the correct multiplicity a more careful analysis is needed, and Theorem 4.2 is useful for hand calculation when we know G(s).

• #### 4.5 Zeros

The zeros are values of s for which G(s) loses rank (see Def. 4.7). In general, there is no relationship between the elements of the transfer function matrix and its (multivariable) zeros. Theorem 4.3 is useful for hand calculation when we know G(s).

• #### 4.6 More on poles and zeros

There are many important things in this section, especially read carefully the small print remarks (4.6.2). Here are some things to note:
1. For a square system G(s), the poles and zeros are essentially the poles and zeros of det G(s). However, this simple method can fail because of pole-zero cancellations which actually are in different parts ("directions") of the system. For example,
```
G(s) = [ (s+2)/(s+1)        0       ]
       [      0        (s+1)/(s+2)  ]
```
has a pole and a zero at -1 and a pole and a zero at -2, even though det G(s) = 1.
2. If G^-1(s) exists, then the poles of G(s) are the zeros of G^-1(s) and vice versa (as for SISO systems).
3. A system with all the states as outputs has no zeros. This explains why zeros were almost forgotten in the heyday of state-space theory, from about 1965 to 1985. (But note that there may be zeros from the inputs u to the controlled outputs y, and if these are in the right half plane, then they pose fundamental difficulties, even if we measure all the states.)
4. Most systems do have zeros, see (4.73)
5. Also non-square systems may have zeros; consider carefully item 13 on page 136.
6. Moving poles and zeros. As a basis consider G(s)
• Feedback, G(I+GK)^-1. Poles (of G) are moved and zeros (of G) are unchanged (in addition we get as zeros the poles of K).
• Series, GK. Poles and zeros are unchanged (with the exception of possible cancellations between poles and zeros in G and K).
• Parallel, G+K. Poles are unchanged, zeros are moved (but note that physically a parallel interconnection requires an additional manipulated input).
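The det G failure mode in item 1 above is easy to check numerically; a sketch of that diagonal example:

```python
import numpy as np

# G(s) = diag( (s+2)/(s+1), (s+1)/(s+2) ): det G(s) = 1 for all s,
# yet the system has poles at -1, -2 (and zeros at -2, -1)
def G(s):
    return np.array([[(s+2)/(s+1), 0.0],
                     [0.0, (s+1)/(s+2)]])

s0 = 0.3 + 0.7j                           # arbitrary test point
d = np.linalg.det(G(s0))                  # ~ 1: det hides the poles and zeros
g11_near_pole = abs(G(-1 + 1e-9)[0, 0])   # huge: element (1,1) has a pole at s=-1
print(d, g11_near_pole)
```

The determinant is identically 1, yet the plant clearly blows up near s = -1; the pole and zero cancel in det G because they sit in different parts of the system.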

• #### 4.7 Internal stability of feedback systems

The main lesson is: Avoid any cancellations of unstable (RHP) poles between G and K.

Otherwise, the system will always be internally unstable (it may seem stable when viewed from one particular input to a particular output, but there will be some hidden unstable mode, which will appear in some other input-output transfer function).

Also note that it does not matter whether the cancellation is perfect or not - any cancellation between two separate physical blocks, in between which there are physical signals, will eventually lead to some signal "blowing up" somewhere.

To be sure that a feedback system consisting of G and K is (internally) stable, you must then consider all the 4 closed-loop transfer functions involving inputs and outputs between the two blocks (Theorem 4.4).

However, if you know there are no pole-zero cancellations between G and K, then you need only check one of them, e.g. S = (I+GK)^-1 (Theorem 4.5).

From the rather simple requirement that unstable pole-zero cancellations are disallowed, some quite powerful conditions can be derived, see section 4.7.1. For example,

• If G has a RHP-zero at z, then so has T (and since T is the closed-loop transfer function from r to y, we see immediately that this imposes a fundamental limitation on performance - see Chapters 5 and 6 for more details). The following interpolation constraints then follow (which generalize (4.83)):
```              y_z^H T(z) = 0,   y_z^H S(z) = y_z^H   (6.4)
```
where y_z is the output zero direction. In words, T(z) must be zero in the output direction y_z, and since S=I-T, S(z) must be one in this direction.

• If G has a RHP-pole at p, then S has a RHP-zero at p (the performance implications of this are not so clear; essentially it requires tight control at s=p, which may be a serious limitation if there is something else, such as a RHP-zero or input saturation, which makes this difficult). The following interpolation constraints then follow:
```              S(p) y_p = 0 ,    T(p) y_p = y_p       (6.5)
```
where y_p is the output pole direction.

• #### 4.8 Stabilizing controllers

This is the Q-parameterization where K(Q) yields all stabilizing controllers when Q is varied freely over all stable Q(s).

This may have significant advantages in controller synthesis, where the objective is to find a K which minimizes some norm of N(K):

1. The search over stabilizing K's (which involves checking the stability of closed-loop transfer functions) is replaced by a search over stable Q.
2. The closed-loop transfer functions turn out to be affine in Q, e.g. S or T can be written H1 + H2 Q H3, which may significantly simplify the optimization (e.g. compared to GK(I+GK)^-1, which is fractional in K).

Note the IMC-structure in Fig. 4.5, which yields a parameterization of all stabilizing controllers for a stable plant G. The feedback signal in the IMC-structure is the difference between the predicted and actual (measured) output (=d_y), so Q may be designed in an open-loop fashion, similar to the design of a feed-forward controller (this is used in the IMC design procedure).

• #### 4.9 Stability analysis in the frequency domain

Let Pol denote the number of unstable poles in L(s) = G(s)K(s), and consider a negative feedback system. The Nyquist stability condition is a way of checking the closed-loop stability, e.g. of (I+L)^-1, by considering only the open-loop transfer function L.

For a SISO system the Nyquist stability criterion says that the system is stable iff (if and only if) L(jw) makes Pol anti-clockwise encirclements of the point -1, which is equivalent to requiring that 1+L(jw) makes Pol anti-clockwise encirclements of the origin.

The MIMO Nyquist stability criterion says that the last statement holds also for MIMO systems if we consider det(I+L):

• Theorem 4.7 The closed-loop system is stable iff det(I+L(jw)) makes Pol anti-clockwise encirclements of the origin.

The small gain theorem for SISO systems says that the system is stable if |L(jw)| < 1 at all frequencies w. This is clearly a very conservative condition, as no phase information is taken into account. For example, L(s) = k/(s+e) gives a stable closed-loop system for all positive values of k, whereas from the small gain theorem we need to require k < e to guarantee stability.

The "tightest" (least conservative) generalization of the small gain theorem to MIMO systems is the spectral radius stability condition (Theorem 4.9), which says that the system is closed-loop stable if rho(L(jw)) < 1 at all frequencies w.

This may be understood as follows: Recall that the spectral radius rho is the largest eigenvalue magnitude, rho = max_i |lambda_i|. The signals which "return" in the same direction after "one turn around the loop" are magnified by the eigenvalues lambda_i (and the directions are the eigenvectors x_i):

```      L x_i = lambda_i x_i.
```
So if all the eigenvalues lambda_i are less than 1 in magnitude, all signals become smaller after each round, and the closed-loop system is stable.
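A small numerical illustration of Theorem 4.9 (the loop L(s) here is made up for illustration, not taken from the book):

```python
import numpy as np

# L(s) = L0 / (s+1): constant interactions filtered by a first-order lag
L0 = np.array([[0.0, 2.0],
               [0.3, 0.0]])

def rho_L(w):
    """Spectral radius of L(jw)."""
    L = L0 / (1j*w + 1.0)
    return max(abs(np.linalg.eigvals(L)))

# sweep the frequency axis and find the worst case
peak = max(rho_L(w) for w in np.logspace(-3, 3, 200))
print(peak)   # sqrt(0.6) ~ 0.775 < 1 at all w => closed-loop stable (Thm 4.9)
```

Note that smax(L0) = 2 > 1, so the small gain theorem is inconclusive here, while the spectral radius condition settles stability.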

### Chapter 5. LIMITATIONS ON PERFORMANCE IN SISO SYSTEMS

Chapters 5 and 6 discuss inherent control limitations imposed by the plant G(s) which cannot be overcome by any controller. Try not to get swamped with all the details in this chapter. Look quickly through the material and try to understand the significance of the controllability rules.

Some key points:

• The definition of (input-output) controllability used in this chapter is as follows:

A plant is controllable if we at each frequency can keep the control error e = y - r less than 1, for any disturbance d with magnitude less than 1 (and in particular for |d| = 1), and for any reference change r with magnitude less than R (and in particular for |r| = R), using manipulated inputs u which are less than 1 in magnitude.

This definition assumes that the variables have been scaled as outlined in Chapter 1, page 6. SCALING IS IMPORTANT !

• In this chapter we do not really distinguish between the bandwidth frequency wB (where |S| crosses 0.707) and the gain crossover frequency wc (where |L| crosses 1).
• A lot of insight into the inherent limitation imposed by the properties of the plant (its "controllability") can be obtained by considering the idealized case of perfect control, where we must have u = G^-1 r - G^-1 G_d d; see p. 163-164.
• First waterbed formula: If you push |S| down somewhere to get good performance, it will peak up somewhere else
• Second waterbed formula: Peak is even higher for plants with RHP-zeros
• RHP-zeros z are bad! (we show this in many ways...). We must have T(z)=0, so we cannot have tight control close to the RHP-zero.
• For RHP-poles p we must have T(p)=1, so we must have tight control close to the RHP-pole (the opposite of a RHP-zero!) Thus, RHP-poles may be bad, especially if combined with RHP-zeros, see (5.49), or if combined with input saturation, see (5.61).
• Fast control is required to reject large disturbances (with a large |Gd|). This may be inconsistent with the presence of RHP-zeros and time delays, see (5.53) and (5.75).
• Large input signals are required to reject large disturbances, which may be inconsistent with input saturation, see (5.59).
• Feedforward control is sensitive to uncertainty, especially if |Gd| is large (say, larger than 5-10) at some frequency.
• Make sure you understand the summary on Figure 5.17, p. 199.
• Make sure you understand everything in the first application example, with first-order plus delay models, on p. 201-203.
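The perfect-control input check from p. 163-164 reduces to a one-liner in the SISO case (illustrative made-up models; with scaled variables we need |u| <= 1):

```python
# Perfect control: u = G^-1 (r - Gd*d).  With r = 0 and |d| = 1 the
# required input magnitude is |Gd/G|; illustrative (hypothetical) models:
G  = lambda s: 5.0 / (s + 1.0)    # plant
Gd = lambda s: 10.0 / (s + 1.0)   # disturbance model, twice the plant gain

w = 0.1                           # any frequency (the ratio is constant here)
u_mag = abs(Gd(1j*w) / G(1j*w))
print(u_mag)   # 2.0 > 1: the input saturates, so this disturbance is too large
```

In scaled variables, |G^-1 Gd| > 1 at some frequency immediately flags an input-saturation (controllability) problem for that disturbance.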

### Chapter 6. LIMITATIONS ON PERFORMANCE IN MIMO SYSTEMS

Note the similarity with Chapter 5. This is important material, but again try not to get lost in the details. Try to make a table of the main results in the two chapters - also to highlight the difference between MIMO and SISO. It is recommended that each student makes such a table (I think probably you learn the most from doing it yourself).

Here are some additional points to note:

• The basis of many of the results in this chapter are the "interpolation constraints" (they are called so because they give restrictions on S(s) and T(s) in terms of some fixed values such as S(z) and S(p)), see p. 215.
• RHP-zero z. T(s) must have a RHP-zero at z, i.e., T(z) has a zero gain in the direction of output direction yz of the zero, and we get: yz^H T(z) = 0 and yz^H S(z) = yz^H.
• RHP-pole p. S(s) must have a RHP-zero (!) at p, i.e. S(p) has zero gain in the output direction yp of the RHP-pole, and we get: S(p) yp = 0 and T(p) yp = yp.
• Based on this we can generalize most of the SISO results, but directions must be taken into account.
• We define the angle between the pole and zero directions as phi = arccos |yz^H yp|. If phi = 90 degrees, then the pole and zero are in completely different directions and there is no interaction (they may be considered separately). If phi = 0 degrees, then they interact as in a SISO system.
• p. 218-219: A plant is functionally controllable if its normal rank is equal to the number of outputs.
For example, a plant with fewer inputs than outputs is not functionally controllable. This applies to the "bath-tub" example in Chapter 4, see p. 124-125. In that example, we have only one input (u = T0, the inlet temperature) and four outputs (the four tank temperatures). These four outputs cannot be controlled independently as functions of time, although we can achieve a given "point value" (so the system is state controllable).
• p. 220: Note that a time delay in a single element may help (!) if it reduces the interactions. However, the minimum time delay which can be factored out of a given output is always bad for control of that output.
• p. 221: Note that the "bad" effect of a RHP-zero may be moved to a particular output, unless yz has a zero element for that output. For example, if we have a RHP-zero with yz = [0.03 -0.04 0.9 0.43]^T, then one may in theory move the bad effect of the RHP-zero to any of the outputs (with the other outputs perfectly controlled). However, in practice, it will be difficult to avoid the effect of the RHP-zero on output 3, because the zero direction is mainly in that output - and trying to move it somewhere else will give large interactions and poor performance - see Example 6.2 on page 222.
• p. 222 and 223: Generally, it is a bad idea to try to get a decoupled response for a plant with a RHP-zero. You have to pay for the decoupling by having a poor response for all outputs (i.e., for an nxn plant with 1 RHP-zero in G(s), you get n RHP-zeros in T by requiring decoupling).
• Note that the RGA is a very useful tool to indicate sensitivity to uncertainty (and in particular to diagonal input uncertainty, which is always present). We have: A plant G(s) with large RGA-elements (say larger than 5-10) in the frequency range important for feedback control is fundamentally difficult to control.
Generally, a diagonal controller (decentralized control) is insensitive to (multivariable) uncertainty - the problem is that it will not always give good performance, even nominally - for example, for a plant with large RGA-elements.
• p. 228: For a single disturbance with model gd we may have the performance objective that the H-infinity norm of Sgd is less than 1. However, if G(s) has a RHP-zero z, then we must as a prerequisite require that |yz^H gd(z)| is less than 1. That is, gd(z) must be less than 1 in the output direction of the RHP-zero.
• p. 229: For a single disturbance, input saturation poses no problem if all elements in the vector G^-1 gd are less than 1 at all frequencies (then we may even achieve perfect control, i.e. e = 0).
If this is violated, then it may still be possible to achieve acceptable control (|e|<1) if (6.46) is satisfied for all singular values of G.
For some good exercises to test your understanding of the material in this chapter, check out Problem 1 in the Trondheim-Exam from 1996 (and its solution).
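The pole/zero alignment angle phi = arccos |yz^H yp| is a one-liner to compute; here using the zero direction yz quoted above and a hypothetical pole direction yp (chosen for illustration, not from the book):

```python
import numpy as np

yz = np.array([0.03, -0.04, 0.9, 0.43])   # zero direction from the example above
yz = yz / np.linalg.norm(yz)              # normalize
yp = np.array([0.0, 0.0, 1.0, 0.0])       # hypothetical RHP-pole direction

phi = np.degrees(np.arccos(abs(yz @ yp)))
print(phi)   # small angle: pole and zero interact almost as in a SISO system
```

A pole aligned with output 3 would thus interact strongly with this zero, whereas a pole direction orthogonal to yz (phi = 90 degrees) could be treated separately.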

### Chapter 7. UNCERTAINTY AND ROBUSTNESS FOR SISO SYSTEMS

The main points are
```
1. Why use frequency-domain uncertainty (neglected dynamics, etc.)
2. Additive uncertainty in the Nyquist plot (frequency-domain)
3. Multiplicative uncertainty. Typical shape of the weight w_I
4. Graphical derivation of the RS-condition, |w_I T| < 1.
5. Graphical derivation of the RP-condition, |w_I T| + |w_P S| < 1.
```
In the MIMO chapter we rederive this using the M Delta-structure and the structured singular value (mu).

Do some of the Exercises on page 249-251 !!

### Chapter 8. ROBUST STABILITY AND PERFORMANCE ANALYSIS

Some main points:
```
1. How to derive M: Read Example 8.4 on p. 301 carefully.
2. How to test RS using the M Delta-structure (Sec. 8.1 and 8.5).
3. For a convex set of perturbations (Delta's) (Sec. 8.5):
   a. For real/complex Delta:
      RS iff det(I - M Delta) neq 0, forall Delta, forall w
   b. For complex Delta only:
      RS iff rho(M Delta) < 1 forall w
4. Delta full matrix (Sec. 8.6):
   SISO:  RS iff |M| < 1 forall w       (iff means "if and only if")
   MIMO:  RS iff smax(M) < 1 forall w
   (Sufficiency is obvious from the small gain theorem; necessity
   follows since any phase in Delta is allowed and any direction
   in Delta is allowed.)
5. Delta block diagonal (Sec. 8.9):
   By definition of mu: RS iff mu(M) < 1
   Here mu(M) is defined as the inverse of the smallest smax(Delta)
   which makes det(I - M Delta) = 0.
6. The upper bound smax(D M D^-1) on mu(M) is derived by (Sec. 8.7)
   i.  EITHER: Noting that rho(M Delta) = rho(D M D^-1 Delta) when
       D is such that D Delta D^-1 = Delta
   ii. OR: By using the idea of scaling in the M Delta block diagram,
       as in the book on page 310.
7. Sec. 8.8: Describes the computation of mu and its bounds (rho and sigma).
8. Sec. 8.10: H-infinity robust performance (RP) is a special case of
   RS. See the "block-diagram" proof on page 324.
9. Summary of tests for NS, NP, RS and RP: see 8.10.2 on page 325 (!!)
```
Application (Section 8.11). It is shown how to:
```
1. Derive P, N and M for the case with input uncertainty and
   performance in terms of weighted sensitivity (see (8.29) and (8.123)).

2. Derive requirements for NS, NP (hinfnorm = with full deltaP),
   RS and RP (see (8.115)-(8.118)).

3. Apply these requirements to the SISO case and rederive the results
   from Chapter 7 (see (8.124)-(8.127)).
```

Note that there are two main approaches to get a robust design:

1. Make it robust to some general class of uncertainty. For SISO systems one may consider GM and PM or the peak of S or T. For MIMO systems the normalized coprime uncertainty provides a good general class, and the corresponding Glover-McFarlane H-infinity loopshaping design has proved useful in many applications.
2. Model the uncertainty in detail and make the system robust with respect to it. This may require a large effort both in terms of uncertainty modelling (in particular for parametric uncertainty) and in terms of analysis and design (which involves the structured singular value).
In practice, one will therefore often start with approach 1, and switch to approach 2 if the resulting control performance is not satisfactory.

### APPENDIX A: MATRIX THEORY AND NORMS

Appendix A gives an overview of matrix theory and norms.

Note in particular the eigenvalue decomposition:

A = T Lambda T^-1 (T contains the eigenvectors, Lambda the eigenvalues)

and the (for us more useful) singular value decomposition (SVD)

A = U Sigma V^H (U and V contain the output and input singular vectors)

The first (maximum) element in the diagonal matrix Sigma is the maximum singular value, which is equal to the induced 2-norm of the matrix A. Note that the columns in U and V are orthonormal, but that this is not the case for the eigenvectors in T.
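A quick NumPy check of these claims (the example matrix is chosen here for illustration, not taken from the book):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 2.0]])          # non-symmetric example matrix

# SVD: U and V always have orthonormal columns
U, S, Vh = np.linalg.svd(A)
svd_orthonormal = np.allclose(U.conj().T @ U, np.eye(2))   # True, always

# The eigenvector matrix T is generally NOT orthonormal
lam, T = np.linalg.eig(A)
eig_orthonormal = np.allclose(T.conj().T @ T, np.eye(2))   # False for this A

# Maximum singular value equals the induced 2-norm
norm_match = np.isclose(S[0], np.linalg.norm(A, 2))
print(svd_orthonormal, eig_orthonormal, norm_match)
```

For a normal matrix (e.g. symmetric A) the eigenvectors can also be chosen orthonormal; the point is that this fails for general matrices, while the SVD directions are always orthonormal.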

Norms are used to generalize the concept of gain, and are extensively discussed in the Appendix.

The Appendix also lists the properties of the RGA-matrix

Lambda(A) = RGA(A) = A x A^-T (x denotes element-by-element multiplication)

Note that Lambda is here used for the RGA-matrix (and NOT the eigenvalue matrix). In the book, Lambda (as a matrix) always means the RGA-matrix, lambda_ij (as a scalar with two indices) refers to the ij'th element of the RGA-matrix, and lambda_i (as a scalar with one index) refers to the i'th eigenvalue.