
Self-optimizing control is a strategy for selecting controlled variables. It is distinguished by the fact that an economic objective function is adopted as a selection criterion. Thus, "self-optimizing" control is the effective translation of economic objectives into control objectives.

Second, we need to find controlled variables associated with the unconstrained degrees of freedom. These are the less obvious self-optimizing control variables. We are looking for some "magic" variables which, when kept constant, indirectly achieve optimal operation in spite of disturbances.

**"Self-optimizing control is when acceptable operation under all conditions is achieved with constant setpoints for the controlled variables."**

Here "acceptable operation" is more precisely defined by the value of the loss, and "under all conditions" means for the defined disturbances, plant changes, and implementation errors.
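The loss is the difference between the cost obtained with the constant-setpoint policy and the truly optimal cost for each disturbance, L(d) = J(c_s, d) - J_opt(d). A minimal sketch, using a made-up scalar quadratic cost J(u, d) = (u - d)^2 (an illustrative example, not from the text):

```python
# Toy illustration of the loss L(d) = J(c_s, d) - J_opt(d).
# Here J(u, d) = (u - d)**2, so u_opt(d) = d and J_opt(d) = 0;
# holding the input at a constant setpoint u_s incurs loss (u_s - d)**2.

def J(u, d):
    return (u - d) ** 2

u_s = 0.0  # nominal setpoint, optimal only for d = 0
for d in (-1.0, 0.0, 0.5):
    loss = J(u_s, d) - J(d, d)  # J_opt(d) = J(u_opt, d) with u_opt = d
    print(f"d = {d:+.1f}: loss = {loss:.2f}")  # losses 1.00, 0.00, 0.25
```

The point of the example is that a constant-setpoint policy is only as good as the variable it holds constant: here the loss grows quadratically with the disturbance, which is exactly what a better choice of controlled variable would reduce.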

To include biological systems, the term "self-optimizing control" should possibly be broadened further, for example by replacing "with constant setpoints for the controlled variables" with "by controlling the right variables" or something similar.

The main issues in selecting controlled variables are disturbances and implementation error (noise, measurement error). All results below are based on a steady-state analysis, since the economics of most processes are determined mainly by the steady-state behavior, but the extension to batch processes (with optimal trajectories) is simple.

A survey of self-optimizing control was published in 2017:

A good introduction to self-optimizing control, with lots of simple examples, is the following paper:

An extremely simple method (the "nullspace method") has been derived by Vidar Alstad, which gives the optimal linear measurement combination c = Hy (with zero loss) for the case with no implementation error (i.e., the noise-free case, n = 0). It is briefly described in the first paper, and more details can be found here
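The key step of the nullspace method is to select H in the left null space of the optimal sensitivity matrix F = dy_opt/dd, so that HF = 0 and the combination c = Hy is unaffected by disturbances. A minimal numerical sketch (the matrix F below contains made-up illustrative numbers, not values from the paper):

```python
import numpy as np

# Nullspace method sketch: choose H with H @ F = 0, where F = dy_opt/dd
# is the sensitivity of the optimal measurement values to the disturbances.
# Requires at least ny >= nu + nd measurements.

def nullspace_H(F):
    """Return H whose rows span the left null space of F (so H @ F = 0)."""
    U, s, _ = np.linalg.svd(F)
    rank = int(np.sum(s > 1e-10))
    return U[:, rank:].T  # left singular vectors orthogonal to range(F)

# Example: ny = 4 measurements, nd = 2 disturbances, nu = 2 inputs
F = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.4, 0.1],
              [0.3, 0.7]])
H = nullspace_H(F)            # shape (2, 4): two combined CVs c = H @ y
print(np.allclose(H @ F, 0))  # True: c_opt is invariant to the disturbances
```

Because HF = 0, the optimal value of c = Hy does not move when d changes, so keeping c at a constant setpoint gives zero loss in the noise-free case.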

The following paper applies the "brute-force approach" to the Tennessee Eastman challenge problem and discusses in particular the feasibility issue (which, by the way, a local method will not detect).

The following paper discusses in more detail the issue of back-off and also shows that in some cases it is optimal - in particular, to remain feasible - to use "robust" setpoints rather than the nominally optimal setpoints:
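The back-off idea can be illustrated with a toy constraint (the numbers below are made up for illustration, not from the paper): if the nominal optimum lies on a constraint, a disturbance can push a constant-setpoint policy infeasible, and backing the setpoint off by the worst-case disturbance restores feasibility at the price of a steady-state loss.

```python
# Back-off sketch (illustrative numbers): maximize u subject to u + d <= 1,
# with disturbance d in [-0.2, 0.2].  The nominally optimal setpoint u = 1
# (constraint active at d = 0) is infeasible for d > 0; backing off by the
# worst-case disturbance keeps operation feasible for all d.

d_max = 0.2
u_nominal = 1.0              # optimal for d = 0, constraint active
u_robust = u_nominal - d_max # backed-off ("robust") setpoint

for d in (-0.2, 0.0, 0.2):
    print(f"d = {d:+.1f}: nominal feasible = {u_nominal + d <= 1}, "
          f"robust feasible = {u_robust + d <= 1}")
```

At d = 0.2 the nominal setpoint violates the constraint while the robust setpoint does not; the back-off of 0.2 is the price paid for guaranteed feasibility.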

The following summary of the maximum gain rule may be useful: Pages from corrected version of book (July 2007)

The maximum gain rule has been applied to many examples. In particular, for scalar cases it is very simple (and efficient!) to use. For multivariable cases, maximizing the minimum singular value is usually OK, but it may fail for some ill-conditioned processes, like distillation:
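As a sketch of how the rule is applied in the multivariable case, one computes the minimum singular value of the scaled steady-state gain matrix for each candidate CV set and prefers the set where it is largest. The candidate gain matrices below are made-up illustrative numbers, not from any specific process:

```python
import numpy as np

# Maximum gain rule sketch: among candidate sets of controlled variables,
# prefer the set whose (scaled) steady-state gain matrix has the largest
# minimum singular value.  A nearly singular gain signals a poor CV set.

def sigma_min(G):
    return np.linalg.svd(G, compute_uv=False).min()

candidates = {
    "CV set A": np.array([[2.0, 0.1],
                          [0.3, 1.5]]),
    "CV set B": np.array([[1.0, 0.9],
                          [1.1, 1.0]]),  # nearly singular gain
}

for name, G in candidates.items():
    print(f"{name}: sigma_min = {sigma_min(G):.3f}")

best = max(candidates, key=lambda k: sigma_min(candidates[k]))
print("preferred:", best)  # CV set A: its gain is far from singular
```

This also shows where the rule can mislead: for an ill-conditioned process (large ratio between the largest and smallest singular values, as in distillation), the minimum singular value alone may not reflect the actual loss.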