Course 43917 (week 5)

Sigurd Skogestad ((no email))
Mon, 17 Feb 1997 18:11:42 +0100


Hallo,

In the lecture on 17 February we covered parts of Chapter 4: state controllability
with the tank example, poles and zeros (4.6), internal stability, and
stabilizing controllers.
A summary of Chapter 4 (except norms) is given below.

There will be an extra lecture on Tuesday 25 February, 15:00-17:00, in room B333
(NOTE the room). I will then cover 4.9 and 4.10, as well as the most important parts of Chapter 5.
Try to get an overview of Chapter 5 before Tuesday!

On Wednesday 26 February there will be a regular lecture, 10:00-12:00, in room B222.
I will then cover Chapter 6 (which generalizes Chapter 5 to MIMO).

Regards, Sigurd

--------------------------
Summary of Chapter 4
--------------------------

Chapter 4. ELEMENTS OF LINEAR SYSTEM THEORY

There is a lot of important material in this chapter (although some of the
details are of less importance).

4.1 System descriptions

For linear systems there are several alternative system
representations. The state-space representation (4.1.1) often
follows directly from a physical model, and is used in most
numerical calculations. The transfer function representation
(4.1.3) is a nice compact representation which yields invaluable
insights; it allows for series connections to be represented by
multiplication of transfer functions. It also leads directly to the
frequency response (4.1.4). You do not need to remember the
details about the coprime factorization (4.1.5), but you should
know that it is a factorization into two stable systems, that it
is useful for representing the class of all stabilizing controllers,
and that it forms the basis for the very useful "coprime uncertainty
description".

4.2 State controllability and state observability

These are important system-theoretical concepts which you
should know the meaning of. Note that modes which are not
state controllable and/or state observable disappear in a minimal
realization of the system (a representation with the smallest
number of states).

There are many ways to check for state controllability and
observability, e.g. with Gramians etc. The method which yields
the most insight is probably to compute the input and output
directions associated with each pole (mode):

Let t and q be the right and left eigenvectors, respectively, of the
state matrix A,

At = pt, q^H A = p q^H

then the pole output and input directions are

y_p = Ct, u_p = B^H q (4.69)

The mode (pole) p is not state observable if and only if (iff)
y_p=0, and the mode (pole) p is not state controllable iff u_p=0. In
other words, a system is state observable iff y_p is nonzero for
all p, and a system is state controllable iff u_p is nonzero for all p.

In MATLAB we use the following commands:

[T,P] = eig(A)     % right eigenvectors t as columns of T
YP = C*T           % output pole directions y_p = C t
[Q,P] = eig(A')    % left eigenvectors q as columns of Q
UP = B'*Q          % input pole directions u_p = B^H q
                   % (note: the two eig calls may order the eigenvalues differently)

Then the columns in YP are the output pole directions, and the
columns in UP are the input pole directions. As an example
consider the following SISO system with two states:

A = [-2 -2; 0 -4]; B = [1 ; 1]; C = [1 0];
and we get
YP = [1 0.71], UP = [1 0 ]

Thus, from YP both modes are state observable from the output,
but from UP only one mode (the state at -4) is state
controllable from the input (match the columns via the eigenvalues
in P, since eig(A) and eig(A') may order them differently). A
minimal realization of the system
thus has only one state, as is easily verified by computing the
transfer function

G(s) = C (sI-A)^-1 B = 1/(s+4).
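The example above can be checked numerically. Here is a small sketch in Python/numpy (rather than MATLAB) that computes the pole directions and matches the two eigenvalue orderings before comparing them:

```python
import numpy as np

A = np.array([[-2., -2.], [0., -4.]])
B = np.array([[1.], [1.]])
C = np.array([[1., 0.]])

# Right eigenvectors t of A give the output pole directions y_p = C t
pR, T = np.linalg.eig(A)
YP = C @ T

# Left eigenvectors q (right eigenvectors of A^H) give u_p = B^H q
pL, Q = np.linalg.eig(A.conj().T)
UP = B.conj().T @ Q

# eig may order the eigenvalues differently in the two calls, so match them
order = [int(np.argmin(np.abs(pL - p))) for p in pR]
for i, p in enumerate(pR):
    print(f"mode at {p.real:+.0f}: |y_p| = {abs(YP[0, i]):.2f}, "
          f"|u_p| = {abs(UP[0, order[i]]):.2f}")
```

The mode at -2 comes out with u_p = 0 (not state controllable), while both modes have nonzero y_p, in agreement with G(s) = 1/(s+4).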

Note Example 4.5 on state controllability of "bath tubs" in series.
This shows that a system may be "state controllable", but still not
be "controllable" in any practical sense. In the bath tub example,
we have only one input (the inlet temperature T0), but we want
to control four outputs (T1, T2, T3, T4). Independent control of 4
outputs using 1 input is clearly not possible (the plant is not
"functional controllable", see section 6.3, p. 218).

4.4 Poles

The poles of G(s) are essentially the poles of the elements of
the transfer function matrix taken together, but to get the correct
multiplicity a more careful analysis is needed; Theorem 4.2 is
useful for hand calculation when we know G(s).

4.5 Zeros

The zeros are values of s for which G(s) loses rank (see Def.
4.7). In general, there is no relationship between the zeros of the
elements of the transfer function matrix and its (multivariable) zeros. Theorem 4.3 is
useful for hand calculation when we know G(s).
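Rank loss is easy to see numerically. In this sketch the 2x2 plant is an assumed example: none of its elements has a zero at s = 4, yet the matrix G(4) is singular, so s = 4 is a multivariable zero:

```python
import numpy as np

# Assumed 2x2 example: G(s) = [[s-1, 4], [4.5, 2(s-1)]] / (s+2)
def G(s):
    return np.array([[s - 1., 4.], [4.5, 2.*(s - 1.)]]) / (s + 2.)

# At the multivariable zero s = 4 the matrix loses rank:
# det G(4) = 0 and the smallest singular value drops to zero
print("det G(4) =", np.linalg.det(G(4.0)))
print("sigma_min at s=4:", np.linalg.svd(G(4.0), compute_uv=False)[-1])
print("sigma_min at s=3:", np.linalg.svd(G(3.0), compute_uv=False)[-1])
```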

4.6 More on poles and zeros

There are many important things in this section, especially read
carefully the small print remarks (4.6.2). Here are some things to
note:
1. For a square system G(s), the poles and zeros are
essentially the poles and zeros of det G(s). However, this
simple method can fail because of pole-zero cancellations
which actually are in different parts ("directions") of the
system. For example,

G(s) = [ (s+2)/(s+1)       0
              0       (s+1)/(s+2) ]

has a pole and zero at -1 and a pole and zero at -2, even
though det G(s) = 1.
2. If G^-1(s) exists, then the poles of G(s) are the zeros of
G^-1(s) and vice versa (as for SISO systems).
3. A system with all the states as outputs has no zeros. This
explains why zeros were almost forgotten in the
heyday of state-space theory from about 1965 to 1985.
(But note that there may be zeros from the inputs u to the
controlled outputs y, and if these are in the right half plane,
then they pose fundamental difficulties, even if we
measure all the states.)
4. Most systems do have zeros; see (4.73).
5. Also non-square systems may have zeros; consider
carefully item 13 on page 136.
6. Moving poles and zeros. As a basis consider G(s):
   - Feedback, G(I+GK)^-1: poles (of G) are moved
     and zeros (of G) are unchanged (in addition we get
     as zeros the poles of K).
   - Series, GK: poles and zeros are unchanged (with
     the exception of possible cancellations between
     poles and zeros in G and K).
   - Parallel, G+K: poles are unchanged, zeros are
     moved (but note that physically a parallel
     interconnection requires an additional manipulated
     input).
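The feedback rule can be verified with simple polynomial arithmetic. This sketch (the particular G and K are assumed examples) forms G(1+GK)^-1 for a SISO pair and shows that the closed-loop zeros are the zero of G plus the pole of K, while the poles move:

```python
import numpy as np

# Assumed SISO example: G = (s+1)/(s+2) (zero -1, pole -2), K = 1/s (pole 0)
nG, dG = np.array([1., 1.]), np.array([1., 2.])
nK, dK = np.array([1.]), np.array([1., 0.])

# Feedback: G(1+GK)^-1 = nG dK / (dG dK + nG nK)
num = np.polymul(nG, dK)
den = np.polyadd(np.polymul(dG, dK), np.polymul(nG, nK))

print("zeros:", np.sort(np.roots(num)))  # zero of G (-1) plus the pole of K (0)
print("poles:", np.sort(np.roots(den)))  # moved away from {-2, 0}
```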

4.7 Internal stability of feedback systems

The main lesson is: Avoid any cancellations of unstable
(RHP) poles between G and K.

Otherwise, the system will always be internally unstable (it may
seem stable when viewed from one particular input to a
particular output, but there will be some hidden unstable mode,
which will appear in some other input-output transfer function).

Also note that it does not matter whether the cancellation is perfect
or not - any cancellation between two separate physical blocks,
in between which there are physical signals, will eventually lead
to some signal "blowing up" somewhere.

To be sure that a feedback system consisting of G and K is
(internally) stable, you must then consider all four closed-loop
transfer functions involving inputs and outputs between the two
blocks (Theorem 4.4).

However, if you know there are no pole-zero cancellations
between G and K, then you need only check one of them, e.g. S =
(I+GK)^-1 (Theorem 4.5).
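The danger of checking only S can be illustrated numerically. In this sketch (an assumed example) K deliberately cancels the RHP-pole of G = 1/(s-1): S = s/(s+1) alone looks stable, but SG retains the unstable mode:

```python
import numpy as np

# Assumed example: G = 1/(s-1) is unstable; K = (s-1)/s cancels the RHP-pole
nG, dG = np.array([1.]), np.array([1., -1.])
nK, dK = np.array([1., -1.]), np.array([1., 0.])

# Closed-loop characteristic polynomial: dG dK + nG nK = (s-1)(s+1)
phi = np.polyadd(np.polymul(dG, dK), np.polymul(nG, nK))
print("characteristic roots:", np.sort(np.roots(phi)))  # [-1, +1]: internally unstable

# S  = dG dK / phi = s(s-1)/((s-1)(s+1)) = s/(s+1): looks stable after cancellation
# SG = nG dK / phi = s/((s-1)(s+1)): the hidden unstable mode at +1 appears here
```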

From the rather simple requirement that unstable pole-zero
cancellations are disallowed, some quite powerful conditions can
be derived, see section 4.7.1. For example,

If G has a RHP-zero at z, then so has T (and since T
is the closed-loop transfer function from r to y, we see
immediately that this imposes a fundamental limitation
on performance - see Chapters 5 and 6 for more details).
The following interpolation constraints then follow (which
generalize (4.83)):

y_z^H T(z) = 0, y_z^H S(z) = y_z^H (6.4)

where y_z is the output zero direction. In words, T(z) must
be zero in the output direction y_z, and since S=I-T, S(z)
must be one in this direction.

If G has a RHP-pole at p, then S has a RHP-zero at p
(the performance implications of this are not so clear;
essentially it requires tight control at s=p, which may be
a serious limitation if there is something else, such as a
RHP-zero or input saturation, which makes this difficult).
The following interpolation constraints then follow:

S(p) y_p = 0 , T(p) y_p = y_p (6.5)

where y_p is the output pole direction.
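For a SISO system these constraints reduce to T(z) = 0 and S(z) = 1 at a RHP-zero z. A quick numerical check (G and the unit-gain K below are assumed examples):

```python
import numpy as np

# Assumed example: G = (s-1)/(s+2) has a RHP-zero at z = 1; take K = 1
G = lambda s: (s - 1) / (s + 2)
K = 1.0
T = lambda s: G(s)*K / (1 + G(s)*K)   # closed-loop transfer function r -> y
S = lambda s: 1 / (1 + G(s)*K)

z = 1.0
print("T(z) =", T(z), " S(z) =", S(z))   # T(z) = 0 and S(z) = 1, as required
```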

4.8 Stabilizing controllers

This is the Q-parameterization where K(Q) yields all stabilizing
controllers when Q is varied freely over all stable Q(s).

This may have significant advantages in controller synthesis
where the objective is to find a K which minimizes some norm
of N(K):
1. The search over stabilizing K's (which involves checking
the stability of closed-loop transfer functions) is replaced
by a search over stable Q.
2. The closed-loop transfer functions turn out to be affine in
Q, e.g. S or T can be written H1 + H2 Q H3, which may
significantly simplify the optimization (e.g. compared to
GK(I+GK)^-1, which is fractional in K).

Note the IMC-structure in Fig. 4.5, which yields a
parameterization of all stabilizing controllers for a stable plant G. The
feedback signal in the IMC-structure is the difference between
the predicted and actual (measured) output (=d_y), so Q may be
designed in an open-loop fashion similar to the design of a
feed-forward controller (the IMC design procedure).
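The affine property is easy to see numerically. In this sketch (a first-order plant and a constant Q, both assumed examples) K = Q(1-GQ)^-1 gives T = GQ and S = 1-GQ, both affine in Q, while K itself is fractional:

```python
import numpy as np

# Assumed stable plant G = 1/(s+1); pick a constant stable "parameter" Q = q
q = 0.8
nG, dG = np.array([1.]), np.array([1., 1.])

# K = Q(1 - GQ)^-1 = q(s+1)/(s+1-q): fractional in q
nK = q * dG
dK = np.polyadd(dG, -q * nG)

# The closed-loop characteristic polynomial dG dK + nG nK collapses to (s+1)^2,
# so S = dG dK / phi = (s+1-q)/(s+1) = 1 - GQ: affine in q
phi = np.polyadd(np.polymul(dG, dK), np.polymul(nG, nK))
print("characteristic polynomial:", phi)   # [1, 2, 1] = (s+1)^2, stable for any q
```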

4.9 Stability analysis in the frequency domain

Let Pol denote the number of unstable poles in L(s) = G(s)K(s),
and consider a negative feedback system. The Nyquist stability
condition is a way of checking the closed-loop stability, e.g. of
(I+L)^-1, by considering only the open-loop transfer function L.

For a SISO system the Nyquist stability criterion says that the system is
stable iff (if and only if) L(jw) makes Pol anti-clockwise
encirclements of the point -1, which is equivalent to requiring
that 1+L(jw) makes Pol anti-clockwise encirclements of the
origin.

The MIMO Nyquist stability criterion says that the last statement
holds also for MIMO systems if we consider det(I+L):
Theorem 4.7 The closed-loop system is stable iff
det(I+L(jw)) makes Pol anti-clockwise encirclements of
the origin.
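The encirclement count can be computed numerically by unwrapping the phase of 1+L(jw) along the imaginary axis. A sketch with an assumed L = 2/(s-1), which has Pol = 1 and a stable closed loop (1+L)^-1 = (s-1)/(s+1):

```python
import numpy as np

# Assumed example: L = 2/(s-1) has one RHP pole (Pol = 1)
w = np.linspace(-100., 100., 100001)
F = 1 + 2/(1j*w - 1)                 # 1 + L(jw) along the imaginary axis

# Winding number about the origin = total unwrapped phase change / 2*pi
phase = np.unwrap(np.angle(F))
winding = (phase[-1] - phase[0]) / (2*np.pi)
print("anti-clockwise encirclements:", round(winding))   # 1, equal to Pol: stable
```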

The small gain theorem for SISO systems says that the system
is stable if |L(jw)| < 1 at all frequencies w. This is clearly a very
conservative condition as no phase information is taken into
account. For example, L(s) = k/(s+e) with k, e > 0 gives a stable closed-loop
system for all values of k, whereas from the small gain
theorem we would need to require k < e to guarantee stability.
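Numerically, with assumed values k = 10 and e = 1:

```python
import numpy as np

# L = k/(s+e) with assumed values k = 10, e = 1: small gain would demand k < e
k, e = 10.0, 1.0
w = np.logspace(-3, 3, 500)
L = k / (1j*w + e)

print("max |L(jw)| =", np.abs(L).max())   # ~10 >> 1: small gain condition fails
print("closed-loop pole:", -(e + k))      # (1+L)^-1 has its pole at -(e+k) < 0: stable
```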

The "tightest" (least conservative) generalization of the small
gain theorem to MIMO systems is the spectral radius stability
condition (Theorem 4.9), which says that the system is
closed-loop stable if rho(L(jw)) < 1 at all frequencies w.

This may be understood as follows: Recall that the spectral
radius rho is the largest eigenvalue magnitude, rho = max_i |lambda_i|.
The signals which "return" in the same direction after
"one turn around the loop" are magnified by the eigenvalues
lambda_i (and the directions are the eigenvectors x_i):

L x_i = lambda_i x_i.

So if all the eigenvalues lambda_i are less than 1 in magnitude, all
signals become smaller after each round, and the closed-loop
system is stable.
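A small numerical sketch of the condition (the 2x2 loop below is an assumed example), evaluating rho(L(jw)) on a frequency grid:

```python
import numpy as np

# Assumed stable 2x2 loop: L(s) = M/(s+1) with a constant interaction matrix M
M = np.array([[0.4, 0.3], [0.2, 0.5]])
w = np.logspace(-2, 2, 200)

# Spectral radius rho = max_i |lambda_i| of L(jw) at each frequency
rho = np.array([np.abs(np.linalg.eigvals(M / (1j*wi + 1.))).max() for wi in w])
print("max over w of rho(L(jw)):", rho.max())   # < 1 at all frequencies: stable
```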

4.10 System norms
(we will cover this on Tuesday!)