Sigurd Skogestad. Work on self-optimizing control

S. Skogestad ``Plantwide control: the search for the self-optimizing control structure'', J. Proc. Control, 10, 487-507 (2000).


For related work on plantwide control, see here.

"Self-optimizing" control is the effective translation of economic objectives into control objectives.

What should we control?

First, we should control the active constraints, since keeping these at their limits is economically optimal (it minimizes the cost). Any deviation from an active constraint (denoted "back-off") gives a loss. These may be viewed as the obvious self-optimizing variables.

Second, we need to find controlled variables associated with the unconstrained degrees of freedom. These are the less obvious self-optimizing control variables. We are looking for some "magic" variables which, when kept constant, indirectly achieve optimal operation in spite of disturbances.

Self-optimizing control

More generally, the following definition of self-optimizing control is probably useful.

"Self-optimizing control is when acceptable operation under all conditions is achieved with constant setpoints for the controlled variables."

Here "acceptable operation" is more precisely defined by the value of the loss, and "under all conditions" means for the defined disturbances, plant changes and implementation errors.

To include biological systems, the term "self-optimizing control" should possibly be broadened further, for example, by replacing "with constant setpoints for the controlled variables" by "by controlling the right variables" or something similar.

The main issues in selecting controlled variables are disturbances and implementation error (noise, measurement error). All results below are based on a steady-state analysis, since the economics of most processes are determined mainly by the steady-state behavior, but the extension to batch processes (with optimal trajectories) is simple.

A good introduction to self-optimizing control, with lots of simple examples, is the following paper:

  • S. Skogestad, ``Near-optimal operation by self-optimizing control: From process control to marathon running and business systems'', Computers and Chemical Engineering, 29 (1), 127-137 (2004).
  • Corresponding slides from the PSE-conference (2004)
Direct loss evaluation method

This paper defines the issues more carefully and describes the "brute-force" approach (direct loss evaluation) for selecting self-optimizing controlled variables:

  • S. Skogestad, ``Plantwide control: the search for the self-optimizing control structure'', J. Proc. Control, 10, 487-507 (2000).
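As a toy illustration of direct loss evaluation (not taken from the paper; the cost function and all numbers below are invented), the sketch compares two candidate controlled variables by the loss L = J - J_opt incurred when each is held at its nominally optimal setpoint over a range of disturbances:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical quadratic cost J(u, d); all numbers invented for illustration.
def J(u, d):
    return (u - d) ** 2 + 0.1 * u ** 2

def J_opt(d):
    """Reoptimized cost for a given disturbance d."""
    return minimize_scalar(lambda u: J(u, d)).fun

# Candidate controlled variables, expressed as the input u implied by
# holding c at its nominally optimal setpoint c_s = 0 (nominal d* = 0).
candidates = {
    "c = y1 = u":     lambda d: 0.0,  # constant input
    "c = y2 = u - d": lambda d: d,    # u - d = 0  =>  u = d
}

disturbances = np.linspace(-1.0, 1.0, 21)
for name, u_of_d in candidates.items():
    worst = max(J(u_of_d(d), d) - J_opt(d) for d in disturbances)
    print(f"{name:16s} worst-case loss = {worst:.4f}")
```

For this toy cost, holding the combination u - d constant gives a much smaller worst-case loss than holding the input constant, which is the kind of comparison the brute-force method makes over all candidate sets.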
The following paper applies the "brute-force" approach to the Tennessee Eastman challenge problem and discusses in particular the feasibility issue (which, by the way, a local method will not detect):

  • T. Larsson, K. Hestetun, E. Hovland, and S. Skogestad, ``Self-Optimizing Control of a Large-Scale Plant: The Tennessee Eastman Process'', Ind. Eng. Chem. Res., 40 (22), 4889-4901 (2001).
The following paper discusses in more detail the issue of back-off and also shows that in some cases it is optimal - in particular to remain feasible - to use "robust" setpoints rather than the nominally optimal setpoints:

  • M.S. Govatsmark and S. Skogestad, ``Selection of controlled variables and robust setpoints'', Ind.Eng.Chem.Res, 44 (7), 2207-2217 (2005).
Local method ("maximum gain rule")

A local analysis, and in particular the "maximum gain rule" (maximize the minimum singular value of the scaled gain matrix), is numerically more efficient. The best alternatives can then be analyzed in more detail using the "brute-force" method. The maximum gain rule is derived in the following paper, together with the "exact local method" and the use of optimal linear measurement combinations:

  • I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, ``Optimal selection of controlled variables'', Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).
  • The following summary of the maximum gain rule may be useful: Pages from corrected version of book (July 2007)

    The maximum gain rule has been applied to many examples. In particular, for scalar cases it is very simple (and efficient!) to use. For multivariable cases, maximizing the minimum singular value is usually OK, but it may fail for some ill-conditioned processes, like distillation:

  • E.S. Hori, S. Skogestad and M.A. Al-Arfaj, ``Self-optimizing control configurations for two-product distillation columns'', Proceedings Distillation and Absorption 2006, London, UK, 4-6 Sept. 2006.
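To sketch how the maximum gain rule might be applied (the gain matrix and measurement spans below are invented, and for simplicity the Hessian scaling with J_uu^{-1/2} is omitted by taking it as identity): scale each candidate measurement by its span (optimal variation plus implementation error) and select the subset that maximizes the minimum singular value of the scaled gain:

```python
import numpy as np
from itertools import combinations

# Hypothetical steady-state gain matrix G (4 measurements y, 2 inputs u)
# and measurement spans; all numbers invented for illustration.
G = np.array([[ 1.0,  0.1],
              [ 0.2,  1.5],
              [ 0.9, -0.8],
              [-0.5,  0.7]])
span = np.array([1.0, 2.0, 1.0, 1.0])

# Maximum gain rule: maximize the minimum singular value of the scaled gain.
def sigma_min(idx):
    rows = list(idx)
    G_scaled = np.diag(1.0 / span[rows]) @ G[rows, :]
    return np.linalg.svd(G_scaled, compute_uv=False)[-1]

best = max(combinations(range(4), 2), key=sigma_min)
print("best measurement pair:", best)
```

For multivariable cases like this one the rule is a screening tool; as noted above, it may fail for ill-conditioned processes such as distillation.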
Measurement combinations as controlled variables

1. Nullspace method

An extremely simple method (the "nullspace method") has been derived by Vidar Alstad, which gives the optimal linear measurement combination c = Hy (with zero loss) for the case with no implementation error (i.e., the noise-free case, n = 0). It is briefly described in the first paper, and more details can be found here:

  • V. Alstad and S. Skogestad, ``Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables'', Ind.Eng.Chem.Res, 46 (3), 846-853 (2007).
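A minimal numerical sketch of the nullspace method (the sensitivity matrix below is invented): with at least n_u + n_d independent measurements, any H in the left null space of the optimal sensitivity matrix F = dy_opt/dd satisfies HF = 0, so c = Hy is insensitive to the considered disturbances when implementation error is neglected:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical optimal sensitivity matrix F = dy_opt/dd:
# 3 measurements, 2 disturbances (numbers invented for illustration).
F = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.8, 0.3]])

# Nullspace method: choose H in the left null space of F, i.e. H F = 0.
H = null_space(F.T).T   # shape (n_y - n_d, n_y) = (1, 3)

print("H =", H)
print("H @ F =", H @ F)  # ~0: c = H y is unaffected by the disturbances
```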
2. Optimal combination

The nullspace method neglects implementation error, and originally I thought a numerical search was required to find the optimal combination H for the case with implementation error included. However, a trick may be used to turn the seemingly nonconvex optimization problem into a constrained QP problem. From this an explicit formula may be derived, see:

  • V. Alstad, S. Skogestad and E.S. Hori, ``Optimal measurement combinations as controlled variables'', Journal of Process Control, Vol.19, 128-148 (2009).
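For the scalar-input case, one form of the explicit solution is easy to verify numerically: with Y = [F W_d, W_n], the combination H^T proportional to (Y Y^T)^{-1} G^y minimizes the local loss (the loss is invariant to multiplying H by a nonsingular matrix, so only the direction of H matters). The sketch below uses invented numbers and checks optimality against random alternatives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data (all numbers invented): one unconstrained input u,
# three measurements y, two disturbances d.
Gy  = np.array([[1.0], [0.5], [2.0]])  # gain dy/du
F   = np.array([[1.0, 0.5],
                [0.2, 1.0],
                [0.8, 0.3]])           # optimal sensitivity dy_opt/dd
Wd  = np.eye(2)                        # disturbance magnitudes
Wn  = 0.1 * np.eye(3)                  # implementation-error magnitudes
Juu = 2.0                              # cost Hessian

Y = np.hstack([F @ Wd, Wn])

def loss(H):
    """Local loss 0.5 * ||Juu^{1/2} (H Gy)^{-1} H Y||_F^2."""
    M = np.sqrt(Juu) * np.linalg.solve(H @ Gy, H @ Y)
    return 0.5 * np.linalg.norm(M) ** 2

# Explicit solution: H^T proportional to (Y Y^T)^{-1} Gy.
H_opt = np.linalg.solve(Y @ Y.T, Gy).T

# Any other combination gives a loss at least as large:
for _ in range(5):
    H_rand = rng.standard_normal((1, 3))
    assert loss(H_opt) <= loss(H_rand) + 1e-12
print("optimal loss:", loss(H_opt))
```

Note that, unlike the nullspace solution, this H trades off disturbance sensitivity against implementation error through the noise weight Wn.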