Read this after you have gone through a chapter, to see that you have understood and covered the main points.
1.1: This book covers more than the traditional controller design and control system analysis (steps 9-10).
1.2 Note the definitions of NS, NP, RS and RP (nominal stability, nominal performance, robust stability, robust performance).
1.3 Note definition of proper, strictly proper, etc.
1.4 SCALING!! Very IMPORTANT in applications, both for model analysis (input-output controllability) and for controller design. Main idea: Variables are dimensionless with 1 as maximum value.
With scaling you make an initial decision regarding performance. This makes weight selection simple later (you may often select identity weights if the initial scaling is reasonable!)
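A minimal numerical sketch of the scaling idea (in Python rather than MATLAB; all plant numbers and units below are hypothetical, invented for illustration):

```python
# Hypothetical SISO scaling example (numbers invented for illustration).
# Unscaled: output y [K] responds to input u [kg/s] with gain 0.8 K/(kg/s),
# and to a disturbance d [K] with gain 1.5.
# Allowed ranges: |u| <= 10 kg/s, |d| <= 2 K, allowed error |e| <= 5 K.
Gu_unscaled = 0.8
Gd_unscaled = 1.5
u_max, d_max, e_max = 10.0, 2.0, 5.0

# Scaled (dimensionless) gains: divide by the max allowed error and
# multiply by the max allowed input/disturbance.
G = Gu_unscaled * u_max / e_max    # = 1.6
Gd = Gd_unscaled * d_max / e_max   # = 0.6

# After scaling, all variables have magnitude at most 1, and the plant can
# (at steady state) reject the worst disturbance with |u| <= 1 iff |G| >= |Gd|.
print(G, Gd, G >= Gd)
```

With reasonable scaling like this, a performance requirement such as |e| < 1 needs no further weighting, which is exactly why identity weights often suffice later.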
1.5 Linear models. Everything in this book is based on linear models, which are often obtained by linearization (e.g. numerical differentiation) of a first-principles model. This is illustrated by an example (room heating).
1.6 Notation. Most important notation: G,K,L,S and T=I-S.
Alternative implementations of two degrees-of-freedom controller: see Fig. 4.5.
General control configuration: P and K (we will get back to it in Chapter 3); see for example Fig. 3.13, which shows a one degree-of-freedom control system in the PK-structure. P is the "generalized" plant.
Generalized control system: Let all signals affecting the system (which we cannot do anything about) be collected in w (disturbances, noise, reference changes). Let the signals we want to be small be collected in z (typically control errors and manipulated inputs). The control objective is then: based on measurements and other signal information in v (including references), use a controller (algorithm) K to generate manipulated input signals u, such that the norm from w to z is small (z remains small in spite of variations in w).
Note: A reference r can be included in both w (exogenous signals) and v (sensed signals) and indirectly also in z (e.g. as y-r).
Question to the reader: What other signals can be included several places?
The main idea is to make the reader familiar with ideas, in particular from the frequency domain, which are crucial for understanding the behavior of MIMO systems.
Note that the "frequency range corresponding to the closed-loop bandwidth wB" is generally the most important. People with experience from the time domain only may wonder what value wB has. Essentially, if tau_cl is the closed-loop time constant, then wB = 1/tau_cl (approximately). Here the "closed-loop time constant" tau_cl is approximately the time it takes from when the system encounters an upset (disturbance, reference change) until the output has covered 63% of its way back to, or towards, its (new) steady-state value.
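The wB = 1/tau_cl rule of thumb can be checked on a first-order closed-loop response (a numpy sketch; tau_cl = 2 chosen arbitrarily):

```python
import numpy as np

# First-order closed-loop response y(t) = 1 - exp(-t/tau_cl) to a unit
# reference step: the output covers 63% of its way to the new steady state
# at t = tau_cl, and the bandwidth is (approximately) wB = 1/tau_cl.
tau_cl = 2.0
wB = 1.0 / tau_cl
y = lambda t: 1.0 - np.exp(-t / tau_cl)

print(y(tau_cl))      # ~0.632: the 63% point is reached at t = tau_cl
print(y(1.0 / wB))    # same point, expressed via the bandwidth
```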
Some points worth noting:
2.1 Non-minimum phase systems are systems with time delays or RHP-zeros (inverse response for a SISO system).
2.2 Feedback control. Note again the definition of L, S and T and how they are related to the closed-loop response.
2.4 Evaluating closed-loop performance. The time domain specifications in Fig. 2.9 are not used later in the book, mainly because of difficulties in formulating easily solvable mathematical problems.
Note the maximum peaks of S and T, denoted Ms and Mt - these are the H-infinity norms!
Note definitions of bandwidth (wB in terms of S) and gain crossover frequency (wc in terms of L).
2.6 Loop shaping. This is the classical trial-and-error approach, where at the end of the section we show how the phase correction step can be "automated" by Glover-McFarlane H-infinity loop shaping.
Note that for disturbance rejection we typically want |L|=|Gd| if we want to avoid using excessive inputs. For disturbances at the inputs, Gd=G, this means that we want a controller which is a constant gain, |K|=1 (no dynamics required for performance, but may need it for robustness, and may also want some integral action at low frequencies).
2.7 Shaping closed-loop transfer functions. A reasonable design specification is weighted sensitivity, p1 = |wP S| < 1. To avoid excessive inputs we also want p2 = |w_u K S| small. We combine these into sqrt(p1^2 + p2^2), see (2.78), which should be minimized. This is equivalent to a mixed sensitivity (S/KS) problem as discussed in Exercise 2.11.
*** Important *** Make sure you understand the meaning of the performance weight wp.
Weight which requires slope of -1 for L at lower frequencies:
wp(s) = (s/M + wB) / (s + wB*A)

where
M  = max peak on S
wB = frequency where the magnitude of S crosses 1
A  = steady-state offset

Study Fig. 2.26 and 2.27 carefully !!!!
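A quick numerical check of the weight's limiting behavior (numpy sketch; the values of M, A and wB are arbitrary choices for illustration):

```python
import numpy as np

# wp(s) = (s/M + wB)/(s + wB*A). The bound |S| < |1/wp| then gives:
#   |1/wp| -> A at low frequency (steady-state offset),
#   |1/wp| -> M at high frequency (max peak of S),
#   |wp| crosses roughly 1 near w = wB.
M, A, wB = 2.0, 1e-4, 1.0
wp = lambda s: (s / M + wB) / (s + wB * A)

print(abs(1.0 / wp(1e-9j)))   # ~A = 1e-4
print(abs(1.0 / wp(1e9j)))    # ~M = 2
print(abs(wp(1j * wB)))       # ~1 (crossover near wB)
```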
****** Homework:
Use H-infinity loopshaping and H-infinity S/KS to design controllers for the disturbance process.
More precisely: Do Exercise 2.3 (p. 33) AND redo Example 2.11 (also see the MATLAB files).
This will: 1. Give a good introduction to the software (MATLAB, the mu-toolbox and the robust control toolbox). 2. Introduce you to design methods which, by introducing the maximum singular value, can be extended directly to MIMO systems.
For SISO systems the GAIN |G d|/|d| is independent of the input magnitude |d|.
Also for MIMO systems the GAIN ||G d|| / ||d|| (where ||.|| is some norm) is independent of the magnitude ||d||, but it DOES depend on the direction of d, see Section 3.3.1.
A plant is said to be ILL-CONDITIONED if the gain depends strongly on the input direction. This is quantified by the condition number gamma(G) (which is much larger than 1 for an ill-conditioned plant).
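The direction dependence is easy to see numerically for the distillation example used later in these notes (a numpy sketch):

```python
import numpy as np

# Distillation example: the MIMO gain ||G d|| / ||d|| depends strongly on
# the direction of d, even though it is independent of the magnitude ||d||.
G0 = np.array([[87.8, -86.4],
               [108.2, -109.6]])

d_high = np.array([1.0, -1.0]) / np.sqrt(2)   # strong (high-gain) direction
d_low  = np.array([1.0,  1.0]) / np.sqrt(2)   # weak (low-gain) direction

print(np.linalg.norm(G0 @ d_high))   # ~197 (close to the max singular value)
print(np.linalg.norm(G0 @ d_low))    # ~1.4 (close to the min singular value)
```

A gain ratio of more than 100 between directions is the signature of an ill-conditioned plant.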
The main rule for evaluating transfer functions is: Start from the output and write down the transfer functions as you meet them going towards the input. If you exit a feedback loop you get a term (I-L)^-1, where L is the transfer function around the loop (gain going backwards).
Note that the order (left to right) in the block diagram and the formulas is reversed!
The rules etc. given in this section are important and should be memorized!
Make sure you understand the singular value decomposition.
G = U Sigma V^H
V: columns give input directions
U: columns give output directions
Sigma: gains relating these directions
The largest singular value (smax) is the induced 2-norm.
Condition number (gamma): max. singular value / min. singular value
For a square matrix, gamma = smax(G) smax(G^-1), so the condition number is large if the product of the largest elements in G and G^-1 is large (the largest singular value is roughly the size of the largest element).
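This identity is easily verified numerically on the distillation example (numpy sketch):

```python
import numpy as np

# Verify gamma(G) = smax(G) * smax(G^-1) for the distillation example.
G0 = np.array([[87.8, -86.4],
               [108.2, -109.6]])

sv = np.linalg.svd(G0, compute_uv=False)    # singular values, descending
gamma = sv[0] / sv[-1]                      # smax/smin
gamma2 = sv[0] * np.linalg.svd(np.linalg.inv(G0), compute_uv=False)[0]

print(gamma)    # ~141.7: an ill-conditioned plant
print(gamma2)   # same value, via smax(G) * smax(G^-1)
```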
Note that the condition number depends strongly on scaling. It may be scaled (minimized) at the input or output or both.
Relative Gain Array (RGA) for square G:
RGA = Lambda(G) = G x G^-T
Useful measure - easy to compute. The difference between RGA and I (the identity matrix) says something about two-way interactions. But the RGA also has many other uses! Note that our definition says nothing about decentralized control (although the RGA is useful for decentralized control).
We have that || RGA ||_sum is approximately equal to the minimized condition number.
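This can be checked numerically for the distillation example (numpy sketch; the elementwise product below mirrors the MATLAB line in the example that follows):

```python
import numpy as np

# Sum-norm of the RGA for the distillation example; per the note above it
# is approximately equal to the minimized condition number.
G0 = np.array([[87.8, -86.4],
               [108.2, -109.6]])

rga = G0 * np.linalg.inv(G0).T      # element-by-element product G x G^-T
rgasum = np.abs(rga).sum()          # || RGA ||_sum

print(rga[0, 0])   # ~35.1: large RGA elements -> sensitive to uncertainty
print(rgasum)      # ~138.3
```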
****** Example MATLAB (Distillation process):

G0 = [87.8 -86.4; 108.2 -109.6];
[U,sval,V] = svd(G0)          % or use: [U,sval,V] = vsvd(G0) for G freq.-varying
rga = G0.*pinv(G0.')          % or use: rga = vrga(G0) - see function vrga
rgasum = sum(sum(abs(rga)))   % sum-norm of RGA (sum of elements)
gamma = cond(G0)              % or use: gamma = vcond(G) for G freq.-varying
gammamin = condmin(G0)        % see function condmin
gammamini = condmini(G0)      % see function condmini
gammamino = condmino(G0)      % see function condmino
1. Decentralized control; K is diagonal
   o Does nothing about interaction - poor performance (even nominally)
   o But normally robust
   o May often tune on-line (little modelling effort)
   o Important issue: Select pairing (to achieve G diagonally dominant)
2. Compensator approaches and decoupling
   o First make something simple which takes care of interactions
   o Then design a diagonal controller for performance etc.
   o Decoupler: E.g. inverse-based controller K = l G^{-1}. Often sensitive to uncertainty. Also: Decoupling may not be required, e.g. for disturbance rejection.
   o SVD-controller: More flexible. Usually based on SVD of G around the crossover frequency.
Square MIMO plant: Poles and zeros are in most cases poles and zeros of det G(s).
More generally: Zeros are where G(s) loses rank - more about this in Chapter 4.
NOTE: Poles are essentially poles of elements of G, but generally the zeros are NOT the zeros of elements of G !!
Right half plane (RHP) zero: Fundamental limitation on achievable control (as for SISO). Why? G(s)^-1 is unstable.
MIMO poles and zeros have directions.
Poles: Look at SVD of G(p): Pole direction is the direction where G(p) is infinite.
Zeros: Look at SVD of G(z): Zero direction is where the gain of G(z) is zero (i.e. it goes to zero for s=z).
Numerical calculation: Use the state-space form, see Chapter 4.
See the 2x2 example with H-infinity design: Note that we can MOVE the effect of a RHP-zero to particular outputs (unless the zero is pinned). Try it yourself with the MATLAB-file Expl3_8.m !
See Matlab file Sec3_7_1.m
NOTE: We use an inverse-based controller. This problem CANNOT happen with a diagonal controller with no peak on S or T.
See Matlab file Sec3_7_2.m
Note that we can handle non-square plants, we need not measure all the outputs, etc. etc.
Note that the general formulation (P) has already been used in the files for H-infinty design (S/KS).
May use sysic in the MATLAB mu-toolbox to generate P.
Do Exercise 3.10 carefully (p. 104)
Also do the following: Exercise 3.15, 3.16, 3.17, 3.18, 3.19, 3.20, 3.21, 3.23, 3.24
There are many ways to check for state controllability and observability, e.g. with Gramians etc. The method which yields the most insight is probably to compute the input and output directions associated with each pole (mode):
Let t and q be the right and the left eigenvectors of the state matrix A.
A t = p t,   q^H A = p q^H

then the pole output and input directions are

y_p = C t,   u_p = B^H q   (4.69)

The mode (pole) p is not state observable if and only if (iff) y_p = 0, and the mode (pole) p is not state controllable iff u_p = 0. In other words, a system is state controllable iff u_p is nonzero for all p, and a system is state observable iff y_p is nonzero for all p.
In MATLAB we use the following commands:
[T,P] = eig(A)
YP = C*T
[Q,P] = eig(A')
UP = B'*Q

Then the columns in YP are the output pole directions, and the columns in UP are the input pole directions. As an example consider the following SISO system with two states:

A = [-2 -2; 0 -4]; B = [1 ; 1]; C = [1 0];

which has poles (eigenvalues) at -2 and -4. Using the commands above we get

YP = [1 0.71], UP = [0 1]

(make sure the order in YP and UP is consistent with the order of the poles in P). Thus, from YP both modes are state observable from the output, but from UP only the second mode (the pole at -4) is state controllable from the input. A minimal realization of the system thus has only one state, as is easily verified by computing the transfer function
G(s) = C (sI-A)^-1 B = 1/(s+4).
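The same pole-direction computation can be replicated in Python (a numpy sketch using eig in place of the MATLAB commands; note eig guarantees no eigenvalue ordering, so modes are matched by value):

```python
import numpy as np

# SISO example from the notes: poles at -2 and -4.
A = np.array([[-2.0, -2.0], [0.0, -4.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

pr, T = np.linalg.eig(A)      # right eigenvectors t -> output directions y_p = C t
pl, Q = np.linalg.eig(A.T)    # left eigenvectors q (A is real) -> u_p = B^H q
YP = (C @ T).ravel()
UP = (B.T @ Q).ravel()

# Match modes by eigenvalue rather than by position.
yp_m4 = YP[np.argmin(abs(pr + 4))]   # output direction of the pole at -4
up_m2 = UP[np.argmin(abs(pl + 2))]   # input direction of the pole at -2

print(abs(yp_m4))   # ~0.71: the mode at -4 is state observable
print(abs(up_m2))   # ~0:   the mode at -2 is NOT state controllable
```

Absolute values are used because eigenvectors are only defined up to sign.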
Note Example 4.5 on state controllability of "bath tubs" in series. This shows that a system may be "state controllable", but still not be "controllable" in any practical sense. In the bath tub example, we have only one input (the inlet temperature T0), but we want to control four outputs (T1, T2, T3, T4). Independent control of 4 outputs using 1 input is clearly not possible (the plant is not "functionally controllable", see section 6.3, p. 218).
G(s) = [ (s+2)/(s+1)      0       ]
       [      0       (s+1)/(s+2) ]

has a pole and a zero at -1 and a pole and a zero at -2, even though det G(s) = 1.
Otherwise, the system will always be internally unstable (it may seem stable when viewed from one particular input to a particular output, but there will be some hidden unstable mode, which will appear in some other input-output transfer function).
Also note that it does not matter whether the cancellation is perfect or not - any cancellation between two separate physical blocks, in between which there are physical signals, will eventually lead to some signal "blowing up" somewhere.
To be sure that a feedback system consisting of G and K is (internally) stable, you must then consider all 4 closed-loop transfer functions involving inputs and outputs between the two blocks (Theorem 4.4).
However, if you know there are no pole-zero cancellations between G and K, then you need only check one of them, e.g. S = (I+GK)^-1 (Theorem 4.5).
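A small hypothetical example (not from the book) of why checking only S can mislead when there IS an unstable pole-zero cancellation between the blocks:

```python
import numpy as np

# Hypothetical cancellation between plant and controller:
#   G = (s-1)/(s+1),  K = (s+1)/(s-1),  so L = G*K = 1 (a constant!).
# S = 1/(1+L) = 1/2 looks perfectly stable, but
# KS = K/(1+GK) = (s+1)/(2(s-1)) has a hidden RHP pole at s = +1,
# while SG = G/(1+GK) = (s-1)/(2(s+1)) happens to be stable.
poles_KS = np.roots([2.0, -2.0])   # denominator of KS: 2(s-1)
poles_SG = np.roots([2.0, 2.0])    # denominator of SG: 2(s+1)

print(poles_KS)   # [1.]  -> internally unstable (RHP pole)
print(poles_SG)   # [-1.] -> this transfer function alone looks fine
```

So with the cancellation present, S and SG hide the instability; only the u-related transfer function KS reveals it, which is why all four must be checked.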
From the rather simple requirement that unstable pole-zero cancellations are disallowed, some quite powerful conditions can be derived, see section 4.7.1. For example,
y_z^H T(z) = 0,   y_z^H S(z) = y_z^H   (6.4)

where y_z is the output zero direction. In words, T(z) must be zero in the output direction y_z, and since S = I - T, S(z) must be one in this direction.
S(p) y_p = 0,   T(p) y_p = y_p   (6.5)

where y_p is the output pole direction.
This may have significant advantages in controller synthesis, where the objective is to find a K which minimizes some norm of N(K).
Note the IMC-structure in Fig. 4.5, which yields a parameterization of all stabilizing controllers for a stable plant G. The feedback signal in the IMC-structure is the difference between the predicted and actual (measured) output (= d_y), so Q may be designed in an open-loop fashion similar to the design of a feed-forward controller (this is used in the IMC design procedure).
For a SISO system the Nyquist stability criterion says that the system is stable iff (if and only if) L(jw) makes Pol anti-clockwise encirclements of the point -1, where Pol is the number of open-loop unstable (RHP) poles. This is equivalent to requiring that 1 + L(jw) makes Pol anti-clockwise encirclements of the origin.
The MIMO Nyquist stability criterion says that the last statement holds also for MIMO systems if we consider det(I+L):
The small gain theorem for SISO systems says that the system is stable if |L(jw)| < 1 at all frequencies w. This is clearly a very conservative condition, as no phase information is taken into account. For example, L(s) = k/(s+e) (with k, e > 0) gives a stable closed-loop system for all values of k, whereas from the small gain theorem we would need to require k < e to guarantee stability.
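The example can be verified directly (numpy-free sketch; e = 0.1 and the k values are arbitrary choices):

```python
# L(s) = k/(s+e): the closed loop 1/(1+L) = (s+e)/(s+e+k) has its pole at
# -(e+k), i.e. it is stable for ALL k > 0, while the small gain theorem
# (|L(jw)| < 1; here |L| peaks at k/e, at w = 0) only guarantees k < e.
e = 0.1
poles, small_gain_ok = [], []
for k in [0.01, 1.0, 100.0]:
    poles.append(-(e + k))            # closed-loop pole, always negative
    small_gain_ok.append(k / e < 1)   # small gain condition k < e

print(poles)           # all negative -> stable in every case
print(small_gain_ok)   # only True for k = 0.01
```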
The "tightest" (least conservative) generalization of the small gain theorem to MIMO systems is the spectral radius stability condition (Theorem 4.9), which says that the system is closed-loop stable if rho(L(jw)) < 1 at all frequencies w.
This may be understood as follows: Recall that the spectral radius rho is the largest eigenvalue magnitude, rho = max_i | lambda_i |. The signals which "return" in the same direction after "one turn around the loop" are magnified by the eigenvalues lambda_i (and the directions are the eigenvectors x_i):
L x_i = lambda_i x_i

So if all the eigenvalues lambda_i are less than 1 in magnitude, all signals become smaller after each round, and the closed-loop system is stable.
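A numpy sketch of this (the matrix L below is an arbitrary illustration, chosen so that rho(L) < 1 even though the induced gain exceeds 1):

```python
import numpy as np

# rho(L) < 1 even though ||L|| > 1: signals still die out around the loop.
L = np.array([[0.5, 2.0],
              [0.0, 0.5]])

rho = max(abs(np.linalg.eigvals(L)))   # spectral radius
print(rho)                             # ~0.5 (< 1)
print(np.linalg.norm(L, 2) > 1)        # True: the induced 2-norm exceeds 1

# "After many turns around the loop" every signal shrinks towards zero:
print(np.abs(np.linalg.matrix_power(L, 100)).max())   # ~0
```

This also shows why the spectral radius, not the induced norm, is the tight condition: a single pass may amplify some signals, but repeated passes cannot.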
Some key points:
A plant is controllable if we at each frequency can keep the control error e = y - r less than 1, for any disturbance d with magnitude less than 1 (and in particular for |d| = 1), and for any reference change r with magnitude less than R (and in particular for |r| = R), using manipulated inputs u which are less than 1 in magnitude.
This definition assumes that the variables have been scaled as outlined in Chapter 1, page 6. SCALING IS IMPORTANT !
Here are some additional points to note:
1. Why use frequency-domain uncertainty (neglected dynamics, etc.)
2. Additive uncertainty in the Nyquist plot (frequency-domain)
3. Multiplicative uncertainty. Typical shape of the weight w_I
4. Graphical derivation of the RS-condition, |w_I T| < 1.
5. Graphical derivation of the RP-condition, |w_I T| + |w_P S| < 1.
In the MIMO chapter we rederive this using the M Delta-structure and the structured singular value (mu).
Do some of the Exercises on pages 249-251 !!
1. How to derive M: Read Example 8.4 on p. 301 carefully.
2. How to test RS using the M Delta-structure (Sec. 8.1 and 8.5).
3. For a convex set of perturbations (Delta's) (Sec. 8.5):
   a. For real/complex Delta: RS iff det(I - M Delta) neq 0, forall Delta, forall w
   b. For complex Delta only: RS iff rho(M Delta) < 1 forall w
4. Delta full matrix (Sec. 8.6):
   SISO: RS iff |M| < 1 forall w (iff means "if and only if")
   MIMO: RS iff smax(M) < 1 forall w
   (Sufficiency is obvious from the small gain theorem; necessity follows since any phase in Delta is allowed and any direction in Delta is allowed.)
5. Delta block diagonal (Sec. 8.9): By definition of mu: RS iff mu(M) < 1.
   Here mu(M) is defined as the inverse of the smallest smax(Delta) which makes det(I - M Delta) = 0.
6. The upper bound smax(D M D^-1) on mu(M) is derived by (Sec. 8.7)
   i. EITHER: Noting that rho(M Delta) = rho(D M D^{-1} Delta) when D is such that D Delta D^-1 = Delta
   ii. OR: By using the idea of scaling in the M Delta block diagram as in the book on page 310.
7. Sec. 8.8: Describes computation of mu and its bounds, rho and sigma.
8. Sec. 8.10: H-infinity robust performance (RP) is a special case of RS. See the "block-diagram" proof on page 324.
9. Summary of tests for NS, NP, RS and RP: see 8.10.2 on page 325 (!!)

Application (Section 8.11). It is shown how to:
1. Derive P, N and M for the case with input uncertainty and performance in terms of weighted sensitivity (see (8.29) and (8.123)).
2. Derive requirements for NS, NP (H-infinity norm; equals RS with a full Delta_P), RS and RP (see (8.115)-(8.118)).
3. Apply these requirements to the SISO case and rederive the results from Chapter 7 (see (8.124)-(8.127)).
Note that there are two main approaches to get a robust design:
Note in particular the eigenvalue decomposition:
A = T Lambda T^-1 (T contains the eigenvectors, Lambda the eigenvalues)
and the (for us more useful) singular value decomposition (SVD)
A = U Sigma V^H (U and V contain the output and input singular vectors)
The first (maximum) element in the diagonal matrix Sigma is the maximum singular value, which is equal to the induced 2-norm of the matrix A. Note that the columns in U and V are orthonormal, but that this is not the case for the eigenvectors in T.
Norms are used to generalize the concept of gain, and are extensively discussed in the Appendix.
The Appendix also lists the properties of the RGA-matrix
Lambda(A) = RGA(A) = A x A^-T (x denotes element-by-element multiplication)
Note that Lambda is here used for the RGA-matrix (and NOT the eigenvalue matrix). In the book Lambda (as a matrix) always means the RGA-matrix, lambda_ij (as a scalar with two indices) refers to the ij'th element of the RGA-matrix, whereas lambda_i (as a scalar with one index) refers to the i'th eigenvalue.