Plantwide control - A review and a new design procedure

Truls Larsson1 and Sigurd Skogestad2
Department of Chemical Engineering
Norwegian University of Science and Technology
N-7034 Trondheim Norway

July 8, 2003

Abstract:

Most (if not all) available control theories assume that a control structure is given at the outset. They therefore fail to answer some basic questions that a control engineer regularly meets in practice (Foss, 1973): ``Which variables should be controlled, which variables should be measured, which inputs should be manipulated, and which links should be made between them?'' These are the questions that plantwide control tries to answer.

There are two main approaches to the problem, a mathematically oriented approach (control structure design) and a process oriented approach. Both approaches are reviewed in the paper. Emphasis is put on the selection of controlled variables (``outputs''), and it is shown that the idea of ``self-optimizing control'' provides a link between steady-state optimization and control.

We also provide some definitions of terms used within the area of plantwide control.

Introduction

A chemical plant may have thousands of measurements and control loops. By the term plantwide control it is not meant the tuning and behavior of each of these loops, but rather the control philosophy of the overall plant with emphasis on the structural decisions. The structural decision include the selection/placement of manipulators and measurements as well as the decomposition of the overall problem into smaller subproblems (the control configuration).

In practice, the control system is usually divided into several layers. Typically, layers include scheduling (weeks), site-wide optimization (day), local optimization (hour), supervisory/predictive control (minutes) and regulatory control (seconds); see Figure  [*].

Figure: Typical control hierarchy in a chemical plant.

The optimization layer typically recomputes new setpoints only once an hour or so, whereas the feedback layer operates continuously. The layers are linked by the controlled variables, whereby the setpoints are computed by the upper layer and implemented by the lower layer. An important issue is the selection of these variables.

Of course, we could imagine using a single optimizing controller which stabilizes the process while at the same time perfectly coordinates all the manipulated variables based on dynamic on-line optimization. There are fundamental reasons why such a solution is not the best, even with today's and tomorrow's computing power. One fundamental reason is the cost of modeling, and the fact that feedback control, without much need for models, is very effective when performed locally. In fact, by cascading feedback loops, it is possible to control large plants with thousands of variables without the need to develop any models. However, the traditional single-loop control systems can sometimes be rather complicated, especially if the cascades are heavily nested or if the presence of constraints during operation makes it necessary to use logic switches. Thus, model based control should be used when the modeling effort gives enough pay-back in terms of simplicity and/or improved performance, and this will usually be at the higher layers in the control hierarchy.

A very important (if not the most important) problem in plantwide control is the issue of determining the control structure: Which ``boxes'' should we include, and what information should be passed between them?

Note that we are here not interested in what should be inside the boxes (which is the controller design or tuning problem). More precisely, control structure design is defined as the structural decisions involved in control system design, including the following tasks (Foss, 1973; Morari, 1982; Skogestad and Postlethwaite, 1996):
  1. Selection of controlled variables $c$ (``outputs''; variables with setpoints)
  2. Selection of manipulated variables $m$ (``inputs'')
  3. Selection of measurements $v$ (for control purposes including stabilization)
  4. Selection of control configuration (a structure interconnecting measurements/setpoints and manipulated variables, i.e. the structure of the controller $K$ which interconnects the variables $c_s$ and $v$ (controller inputs) with the variables $m$)
  5. Selection of controller type (control law specification, e.g., PID, decoupler, LQG, etc.).
In most cases the control structure design is solved by a mixture of a top-down consideration of control objectives and which degrees of freedom are available to meet these (tasks 1 and 2), and a bottom-up design of the control system, starting with the stabilization of the process (tasks 3, 4 and 5).

In most cases the problem is solved without the use of existing theoretical tools. In fact, the industrial approach to plantwide control is still very much along the lines described by Page Buckley in his book from 1964. Of course, the control field has made many advances over these years, for example, in methods for and applications of on-line optimization and predictive control. Advances have also been made in control theory and in the formulation of tools for analyzing the controllability of a plant. These latter tools can be most helpful in screening alternative control structures. However, a systematic method for generating promising alternative structures has been lacking. This is related to the fact that the plantwide control problem itself has not been well understood or even acknowledged as important.

The control structure design problem is difficult to define mathematically, both because of the size of the problem, and the large cost involved in making a precise problem definition, which would include, for example, a detailed dynamic and steady state model. An alternative to this is to develop heuristic rules based on experience and process understanding. This is what will be referred to as the process oriented approach.

The realization that the field of control structure design is underdeveloped is not new. In the 1970's several ``critique'' articles were written on the gap between theory and practice in the area of process control. The most famous is the one by Foss (1973), who made the observation that in many areas application was ahead of theory, and he stated that

The central issue to be resolved by the new theories are the determination of the control system structure. Which variables should be measured, which inputs should be manipulated and which links should be made between the two sets.     ...      The gap is present indeed, but contrary to the views of many, it is the theoretician who must close it.
A similar observation that applications seem to be ahead of formal theory was made by Findeisen et al. (1980) in their book on hierarchical systems (p. 10).

Many authors point out that the need for a plantwide perspective on control is mainly due to changes in the way plants are designed - with more heat integration and recycle and less inventory. Indeed, these factors lead to more interactions and therefore the need for a perspective beyond individual units. However, we would like to point out that even without any integration there is still a need for a plantwide perspective, since a chemical plant consists of a string of units connected in series, and one unit will act as a disturbance to the next; for example, all units must have the same throughput at steady state.

Outline

We will first discuss in more detail some of the terms used above and provide some definitions. We then present a review of some of the work on plantwide control. In section [*] we discuss the mathematically oriented approach (control structure design). Then, in section [*] we look at the process oriented approach. In section [*] we consider a fairly simple plant consisting of reactor, separator and recycle. In section 7 we consider the most studied plantwide control problem, namely the Tennessee Eastman problem introduced by Downs and Vogel (1993), and we discuss how various authors have attempted to solve the problem. Finally, in section 8 we propose a new plantwide control design procedure.

Terms and definitions

We here make some comments on the terms introduced above, and also attempt to provide some more precise definitions of these terms and some additional ones.

Let us first consider the terms plant and process, which in the control community are almost synonymous terms. The term plant is somewhat more general than process: A process usually refers to the ``process itself'' (without any control system) whereas a plant may be any system to be controlled (including a partially controlled process). However, note that in the chemical engineering community the term plant has a somewhat different meaning, namely as the whole factory which consists of many process units; the term plantwide control is derived from this meaning of the word plant.

Let us then discuss the two closely related terms layer and level which are used in hierarchical control. Following the literature, e.g. Findeisen et al. (1980), the correct term in our context is layer. In a multi-layer system the layers act on different time scales; each layer has some feedback or information from the process and follows setpoints given from the layers above. A lower layer may not know the criterion of optimality by which the setpoint has been set. A multi-layer system cannot be strictly optimal because the actions of the higher layers are discrete and thus unable to follow strictly the optimal continuous time pattern. (On the other hand, in a multilevel system there is no time scale separation and the parts are coordinated such that there is no performance loss. Multilevel decomposition may be used in the optimization algorithm but otherwise is of no interest here.)

Control is the adjustment of available degrees of freedom (manipulated variables) to assist in achieving acceptable operation of the plant. Control system design may be divided into three main activities

  1. Control structure design (structural decisions; the topic of this paper)
  2. Controller design (parametric decisions)
  3. Implementation

The term control structure design, which is commonly used in the control community, refers to the structural decisions in the design of the control system. It is defined by the five tasks given in the introduction. The result from the control structure design is the control structure (alternatively denoted the control strategy or control philosophy of the plant).

The term plantwide control is used only in the process control community. One could regard plantwide control as the ``process control'' version of control structure design, but this is probably a bit too limiting. In fact, Rinard and Downs (1992) refer to the control structure design problem as defined above as the ``strict definition of plantwide control'', and they point out that plantwide control also includes important issues such as operator interaction, startup, grade-change, shut-down, fault detection, performance monitoring and the design of safety and interlock systems. This is also in line with the discussion by Stephanopoulos (1982).

Maybe a better distinction is the following: Plantwide control refers to the structural and strategic decisions involved in the control system design of a complete chemical plant (factory), and control structure design is the systematic (mathematical) approach for solving this problem.

The control configuration is defined as the restrictions imposed on the overall controller $K$ by decomposing it into a set of local controllers (sub-controllers, units, elements, blocks) with predetermined links and possibly with a predetermined design sequence where sub-controllers are designed locally.

Operation involves the behavior of the system once it has been built, and this includes a lot more than control. More precisely, the control system is designed to aid the operation of the plant. Operability is the ability of the plant (together with its control system) to achieve acceptable operation (both statically and dynamically). Operability includes flexibility, switchability and controllability as well as many other issues.

Flexibility refers to the ability to obtain feasible steady-state operation at a given set of operating points. This is a steady-state issue, and we will assume it to be satisfied at the operating points we consider. It is not considered any further in this paper.

Switchability refers to the ability to go from one operating point to another in an acceptable manner usually with emphasis on feasibility. It is not considered explicitly in this paper.

Optimal operation usually refers to the nominally optimal way of operating a plant as it would result by applying steady-state and/or dynamic optimization to a model of the plant (with no uncertainty), attempting to minimize the cost index $J$ by adjusting the degrees of freedom. We have here assumed that the ``quality (goodness) of operation'' can be quantified in terms of a scalar performance index (objective function) $J$, which should be minimized. For example, $J$ can be the operating costs.

In practice, we cannot obtain optimal operation due to uncertainty. The difference between the actual value of the objective function $J$ and its nominally optimal value is the loss.

The two main sources of uncertainty are (1) signal uncertainty (includes disturbances $d$ and measurement noise $n$) and (2) model uncertainty.

Robust means insensitive to uncertainty. Robust optimal operation is the optimal way of operating a plant (with uncertainty considerations included).

Integrated optimization and control (or optimizing control) refers to a system where optimization and its control implementation are integrated. In theory, it should be possible to obtain robust optimal operation with such a system. In practice, one often uses a hierarchical decomposition with separate layers for optimization and control. In making this split we assume that for the control system the goal of ``acceptable operation'' has been translated into ``keeping the controlled variables ($c$) within specified bounds from their setpoints ($c_s$)''. The optimization layer sends setpoint values ($c_s$) for selected controlled variables ($c$) to the control layer. The setpoints are updated only periodically. (The tasks, or parts of the tasks, in either of these layers may be performed by humans.) The control layer may be further divided, e.g. into supervisory control and regulatory control. In general, in a hierarchical system, the lower layers work on a shorter time scale.
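To illustrate the mechanics of this hierarchical split, the small sketch below is a toy illustration (the process model, the cost surrogate and all numbers are assumed, not taken from any specific plant or reference): a slow ``optimization'' layer recomputes the setpoint $c_s$ only periodically, while a fast feedback layer adjusts the input every step to keep $c$ at $c_s$.

\begin{verbatim}
# Toy illustration of the two-layer idea: the optimization layer recomputes
# the setpoint c_s only occasionally, while a fast feedback (PI) layer keeps
# the controlled variable c at c_s in between. All numbers are assumed.

def optimize_setpoint(d):
    """Stand-in for the optimization layer: best c_s for disturbance d."""
    return 1.0 + 0.5 * d

def simulate(n_steps=600, dt=1.0, reoptimize_every=200):
    c, integral, d = 0.0, 0.0, 0.0
    c_s = optimize_setpoint(d)
    Kc, tau_I, tau_p = 2.0, 50.0, 20.0      # assumed controller/process parameters
    for k in range(n_steps):
        if k == 300:                        # disturbance enters mid-run
            d = 1.0
        if k % reoptimize_every == 0:       # slow layer: periodic setpoint update
            c_s = optimize_setpoint(d)
        e = c_s - c                         # fast layer: PI feedback every step
        integral += e * dt
        u = Kc * (e + integral / tau_I)
        c += dt / tau_p * (-c + u + 0.3 * d)   # simple first-order process response
    return c, c_s

print(simulate())   # c approaches the most recently computed setpoint c_s
\end{verbatim}

Between re-optimizations the feedback layer simply holds the old setpoint, which is exactly the situation where the self-optimizing property discussed below matters.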

In addition to keeping the controlled variables at their setpoints, the control system must ``stabilize'' the plant. We have here put stabilize in quotes because we use the word in an extended meaning, and include both modes which are mathematically unstable as well as slow modes (``drift'') that need to be ``stabilized'' from an operator point of view. Usually, stabilization is done within a separate (lower) layer of the control system, often called the regulatory control layer. The controlled variables for stabilization are measured output variables, and their setpoints may be used as degrees of freedom for the layers above.

For each layer in a control system we use the terms controlled output ($y$ with setpoint $y_s$) and manipulated input ($u$). Correspondingly, the term ``plant'' refers to the system to be controlled (with manipulated variables $u$ and controlled variables $y$). The layers are often structured hierarchically, such that the manipulated input for a higher layer ($u_1$) is the setpoint for a lower layer ($y_{2s}$), i.e. $y_{2s} = u_1$. (These controlled variables need in general not be measured variables, and they may include some of the manipulated variables ($u$).)

From this we see that the terms ``plant'', ``controlled output'' ($y$) and ``manipulated input'' ($u$) take on different meanings depending on where we are in the hierarchy. To avoid confusion, we reserve special symbols for the variables at the top and bottom of the hierarchy. Thus, as already mentioned, the term process is often used to denote the uncontrolled plant as seen from the bottom of the hierarchy. Here the manipulated variables are the physical manipulators (e.g. valve positions), and are denoted $m$, i.e. $u=m$ in the bottom ``regulatory'' control layer. Correspondingly, at the top of the hierarchy, we use the symbol $c$ to denote the controlled variables for which the setpoint values ($c_s$) are determined by the optimization layer, i.e. $y=c$ in the top ``supervisory'' control layer.

(Input-Output) Controllability of a plant is the ability to achieve acceptable control performance, that is, to keep the controlled variables ($y$) within specified bounds from their setpoints ($r$), in spite of signal uncertainty (disturbances $d$, noise $n$) and model uncertainty, using available inputs ($u$) and available measurements. In other words, the plant is controllable if there exists a controller which satisfies the control objectives.

This definition of controllability may be applied to the control system as a whole, or to parts of it (in case the control layer is structured). The term controllability generally assumes that we use the best possible multivariable controller, but we may impose restrictions on the class of allowed controllers (e.g. consider ``controllability with decentralized PI control'').

A plant is self-regulating if, with constant inputs, we can keep the controlled variables within acceptable bounds. (Note that this definition may be applied to any layer in the control system, so the plant may be a partially controlled process.) ``True'' self-regulation is defined as the case where no control is ever needed at the lowest layer (i.e. $m$ is constant). It relies on the process to dampen the disturbances itself, e.g. by having large buffer tanks. We rarely have ``true'' self-regulation because it may be very costly.

Self-optimizing control is when an acceptable loss can be achieved using constant setpoints for the controlled variables (without the need to reoptimize when disturbances occur). ``True'' self-optimization is defined as the case where no re-optimization is ever needed (so $c_s$ can be kept constant always), but this objective is usually not satisfied. On the other hand, we must require that the process is ``self-optimizable'' within the time period between each re-optimization, or else we cannot use separate control and optimization layers.

A process is self-optimizable if there exists a set of controlled variables ($c$) such that, if we keep constant setpoints ($c_s$) for these variables, the loss stays within an acceptable bound over a specified time period. A steady-state analysis is usually sufficient to analyze whether we have self-optimality. This is based on the assumption that the closed-loop time constant of the control system is smaller than the time period between each re-optimization (so that the plant settles to a new steady state) and that the value of the objective function ($J$) is mostly determined by the steady-state behavior (i.e. there is no ``costly'' dynamic behavior, e.g. imposed by poor control).
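To state the loss used in these definitions compactly (using only the symbols already introduced; $u(c_s,d)$ denotes the input that results when the control layer keeps the controlled variables $c$ at the constant setpoints $c_s$ for a given disturbance $d$):
\begin{displaymath}
L(d) = J\left(u(c_s,d),\, d\right) - J_{\rm opt}(d), \qquad J_{\rm opt}(d) = \min_{u} J(u,d)
\end{displaymath}
Self-optimizing control then corresponds to $L(d)$ remaining within an acceptable bound for the expected disturbances, without re-optimizing $c_s$.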

General reviews and books on plantwide control

We here present a brief review of some of the previous reviews and books on plantwide control.

Morari (1982) presented a well-written review on plantwide control, where he discusses why modern control techniques were not (at that time) in widespread use in the process industry. The four main reasons were believed to be

  1. Large scale system aspects.
  2. Sensitivity (robustness).
  3. Fundamental limitations to control quality.
  4. Education.

He then considered two ways to decompose the problem:

  1. Multi-layer (vertical), where the difference between the layers are in the frequency of adjustment of the input.
  2. Horizontal decomposition, where the system is divided into noninteracting parts.

Stephanopoulos (1982) stated that the synthesis of a control system for a chemical plant is still to a large extent an art. He asked: ``Which variables should be measured in order to monitor completely the operation of a plant? Which input should be manipulated for effective control? How should measurements be paired with the manipulations to form the control structure, and finally, what the control laws are?'' He noted that the problem of plantwide control is ``multi-objective'' and ``there is a need for a systematic and organized approach which will identify all necessary control objectives''. The article is comprehensive, and discusses many of the problems in the synthesis of control systems for chemical plants.

Rinard and Downs (1992) review much of the relevant work in the area of plantwide control, and they also refer to important papers that we have not referenced. They conclude the review by stating that ``the problem probably never will be solved in the sense that a set of algorithms will lead to the complete design of a plantwide control system''. They suggest that more work should be done on the following items: (1) A way of answering whether or not the control system will meet all the objectives, (2) Sensor selection and location (where they indicate that theory on partial control may be useful), (3) Processes with recycle. They also welcome computer-aided tools, better education and good new test problems.

The book by Balchen and Mummé (1988) attempts to combine process and control knowledge, and to use this to design control systems for some common unit operations and also consider plantwide control. The book provides many practical examples, but there is little in terms of analysis tools or a systematic framework for plantwide control.

The book ``Integrated process control and automation'' by Rijnsdorp (1991) contains several subjects that are relevant here. Part II of the book is on optimal operation. He distinguishes between two situations, a sellers' market (maximize production) and a buyers' market (produce a given amount at lowest possible cost). He also gives a procedure for the design of an optimizing control system.

van de Wal and de Jager (1995) list several criteria for the evaluation of control structure design methods: generality, applicable to nonlinear control systems, controller-independent, direct, quantitative, efficient, effective, simple and theoretically well developed. After their review they conclude that no such method exists.

The book by Skogestad and Postlethwaite (1996) has two chapters on controllability analysis, and one chapter on control structure design (Chapter 10) where they discuss topics related to partial control and self-optimizing control (although they did not use that term).

The planned monograph by Ng and Stephanopoulos (1998a) deals almost exclusively with plantwide control.

The book by Luyben et al. (1998) has collected much of Luyben's practical ideas and summarized them in a clear manner. The emphasis is on case studies.

There also exists a large body of system-theoretic literature within the field of large scale systems, but most of it has little relevance to plantwide control. One important exception is the book by Findeisen et al. (1980) on ``Control and coordination in hierarchical systems'' which probably deserves to be studied more carefully by the process control community.


Control Structure Design (The mathematically oriented approach)

In this section we look at the mathematically oriented approach to plantwide control.

Structural methods

There are some methods that use structural information about the plant as a basis for control structure design. For a recent review of these methods we refer to the planned monograph of Ng and Stephanopoulos (1998a). Central concepts are structural state controllability, observability and accessibility. Based on this, sets of inputs and measurements are classified as viable or non-viable. Although the structural methods are interesting, they are not quantitative and usually provide little information other than confirming insights about the structure of the process that most engineers already have.

In the remainder of this section we discuss the five tasks of the control structure design problem, listed in the introduction.


Selection of controlled variables ($c$)

By ``controlled variables'' we here refer to the controlled variables $c$ for which the setpoints $c_s$ are determined by the optimization layer. There will also be other (internally) controlled variables which result from the decomposition of the controller into blocks or layers (including controlled measurements used for stabilization), but these are related to the control configuration selection, which is discussed as part of task 4.

The issue of selection of controlled variables is probably the least studied of the tasks in the control structure design problem. In fact, it seems from our experience that most people do not consider it as an important issue. Therefore, the decision has mostly been based on engineering insight and experience, and the validity of the selection of controlled variables has seldom been questioned by the control theoretician.

To see that the selection of output is an issue, ask the question:

Why are we controlling hundreds of temperatures, pressures and compositions in a chemical plant, when there is no specification on most of these variables?
After some thought, one realizes that the main reason for controlling all these variables is that one needs to specify the available degrees of freedom in order to keep the plant close to its optimal operating point. But there is a follow-up question:
Why do we select a particular set $c$ of controlled variables? (e.g., why specify (control) the top composition in a distillation column, which does not produce final products, rather than just specifying its reflux?)
The answer to this second question is less obvious, because at first it seems like it does not really matter which variables we specify (as long as all degrees of freedom are consumed, because the remaining variables are then uniquely determined). However, this is true only when there is no uncertainty caused by disturbances and noise (signal uncertainty) or model uncertainty. When there is uncertainty then it does make a difference how the solution is implemented, that is, which variables we select to control at their setpoints.

Maarleveld and Rijnsdorp (1970), Morari et al. (1980), Skogestad and Postlethwaite (1996) (Chapter 10.3), Skogestad (2000) and Zheng et al. (1999) propose to base the selection of controlled variables on the overall operational objective. The overall objective may be formulated as a scalar cost function $J$ which should be minimized subject to a set of operational constraints. Maarleveld and Rijnsdorp (1970) found that in many cases all the degrees of freedom are used to satisfy constraints, and the controlled variables should then simply be selected as the active constraints. For example, if it is optimal to keep the reactor temperature at its upper limit, then this should be selected as a controlled variable.

The more difficult case is if we have unconstrained degrees of freedom, for example, the optimal heat input when we bake a cake.

The basic idea of what we here call self-optimizing control was formulated about twenty years ago by Morari et al. (1980):

``in attempting to synthesize a feedback optimizing control structure, our main objective is to translate the economic objectives into process control objectives. In other words, we want to find a function $c$ of the process variables which when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions. [...] This means that by keeping the function $c(u,d)$ at the setpoint $c_s$, through the use of the manipulated variables $u$, for various disturbances $d$, it follows uniquely that the process is operating at the optimal steady-state.''
If we replace the term ``optimal adjustments'' by ``acceptable adjustments (in terms of the loss)'' then the above is a precise description of what Skogestad (2000) denote a self-optimizing control structure. The only factor Morari et al. (1980) fail to consider is the effect of the implementation error $c - c_s$. Morari et al. (1980) propose to select the best set of controlled variables based on minimizing the loss (``feedback optimizing control criterion 1''). The relationship to the work of Shinnar is discussed separately later.
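The mechanics of this loss-based selection can be illustrated with a small numerical sketch. Everything below (the quadratic cost, the two candidate controlled variables, the disturbance range) is an assumed toy example, not taken from Morari et al. (1980); it only shows the procedure: fix each candidate $c$ at its nominally optimal value, recompute the input implied by that choice for each disturbance, and compare the resulting cost with the re-optimized cost.

\begin{verbatim}
import numpy as np

def J(u, d):                       # assumed scalar operating cost
    return (u - d) ** 2 + 0.1 * u ** 2

u_grid = np.linspace(-3.0, 3.0, 6001)

def J_opt(d):                      # re-optimized cost for a given disturbance
    return J(u_grid, d).min()

candidates = {                     # candidate controlled variables c(u, d)
    "c = u (keep input constant)": lambda u, d: u,
    "c = u - d (a measured combination)": lambda u, d: u - d,
}

d_nom = 0.0
u_opt_nom = u_grid[np.argmin(J(u_grid, d_nom))]

for name, c in candidates.items():
    c_s = c(u_opt_nom, d_nom)      # setpoint fixed at the nominal optimum
    losses = []
    for d in np.linspace(-1.0, 1.0, 21):
        # input implied by keeping c at c_s for this disturbance
        u = u_grid[np.argmin(np.abs(c(u_grid, d) - c_s))]
        losses.append(J(u, d) - J_opt(d))
    print(f"{name:40s} worst-case loss: {max(losses):.4f}")
\end{verbatim}

For this toy cost the candidate $c = u - d$ gives a far smaller worst-case loss than keeping the input constant, which is exactly the kind of ranking the loss evaluation is meant to provide.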

Somewhat surprisingly, the ideas of Morari et al. (1980) received very little attention, at least during the first 20 years after their publication. One reason is probably that the paper also dealt with the issue of finding the optimal operation (and not only on how to implement it), and another reason is that the only example in the paper happened to result in an optimal solution where all degrees of freedom were used to satisfy constraints. The follow-up paper by Arkun and Stephanopoulos (1980) concentrated further on the constrained case and tracking of active constraints.

Skogestad and Postlethwaite (1996) (Chapter 10.3) present an approach for selecting controlled outputs similar to that of Morari et al. (1980), and the ideas were further developed in (Skogestad, 2000), where the term self-optimizing control is introduced. Skogestad (2000) stresses the need to consider the implementation error when evaluating the loss, and gives four requirements that a controlled variable should meet: 1) Its optimal value should be insensitive to disturbances. 2) It should be easy to measure and control accurately. 3) Its value should be sensitive to changes in the manipulated variables. 4) For cases with two or more controlled variables, the selected variables should not be closely correlated. By scaling the variables properly, Skogestad and Postlethwaite (1996) show that the self-optimizing control structure is related to maximizing the minimum singular value of the gain matrix $G$ from the inputs $u$ to the controlled variables $c$ (i.e. $\Delta c = G \, \Delta u$). Zheng et al. (1999) also use the ideas of Morari et al. (1980) as a basis for selecting controlled variables.
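Once a properly scaled steady-state gain matrix is available, this minimum-singular-value rule can be applied as a simple screen of candidate output sets. The matrix below is an arbitrary illustration (assumed numbers, not from the cited references); the sketch only shows the mechanics of ranking square subsets of outputs by the minimum singular value of the corresponding scaled gain.

\begin{verbatim}
import numpy as np
from itertools import combinations

# Assumed scaled steady-state gain matrix: rows are candidate controlled
# variables, columns are the available unconstrained inputs.
G_full = np.array([[ 1.0,  0.1],
                   [ 0.9,  0.2],
                   [ 0.1,  1.2],
                   [ 0.5, -0.6]])

n_inputs = G_full.shape[1]

# Rank candidate sets (one controlled variable per input) by the minimum
# singular value of the corresponding square gain matrix: larger is better.
ranking = []
for rows in combinations(range(G_full.shape[0]), n_inputs):
    sigma_min = np.linalg.svd(G_full[list(rows), :], compute_uv=False).min()
    ranking.append((sigma_min, rows))

for sigma_min, rows in sorted(ranking, reverse=True):
    print(f"candidate outputs {rows}: sigma_min = {sigma_min:.3f}")
\end{verbatim}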

In his book Rijnsdorp (1991) gives on page 99 a stepwise design procedure for designing optimizing control systems for process units. One step is to ``transfer the result into on-line algorithms for adjusting the degrees of freedom for optimization''. He states that this ``requires good process insight and control structure know-how. It is worthwhile basing the algorithm as far as possible on process measurements. In any case, it is impossible to give a clear-cut recipe here.''

Fisher et al. (1988) discuss plant economics in relation to control. They provide some interesting heuristic ideas. In particular, hidden in their HDA example in part 3 (p. 614) one finds an interesting discussion on the selection of controlled variables, which is quite closely related to the ideas of Morari et al. (1980).

Luyben (1988) introduced the term ``eigenstructure'' to describe the inherently best control structure (with the best self-regulating and self-optimizing property). However, he did not really define the term, and the name is also unfortunate since ``eigenstructure'' has another, unrelated mathematical meaning in terms of eigenvalues. Apart from this, Luyben and coworkers (e.g. Luyben (1975), Yi and Luyben (1995)) have studied unconstrained problems, and some of their ideas are related to self-optimizing control. However, Luyben proposes to select controlled variables which minimize the steady-state sensitivity of the manipulated variable ($u$) to disturbances, i.e. to select controlled variables ($c$) such that $(\partial u/ \partial d)_{c}$ is small, whereas one should really minimize the steady-state sensitivity of the economic loss ($L$) to disturbances, i.e. to select controlled variables ($c$) such that $(\partial L/ \partial d)_{c}$ is small.

Narraway et al. (1991), Narraway and Perkins (1993) and Narraway and Perkins (1994) strongly stress the need to base the selection of the control structure on economics, and they discuss the effect of disturbances on the economics. However, they do not formulate any rules or procedures for selecting controlled variables.

In a study of the Tennessee Eastman challenge problem, Ricker (1996) notes that when applying both MPC and decentralized methods, one needs to make critical decisions without quantitative justification. The foremost of these is the selection of the controlled variables, and he found existing quantitative methods for their selection to be inadequate. Ricker (1995) states that the controlled variables ``must be carefully chosen; arbitrary use of feedback control loops should be avoided''.

Finally, Mizoguchi et al. (1995) and Marlin and Hrymak (1997) stress the need to find a good way of implementing the optimal solution in terms of how the control system should respond to disturbances, ``i.e. the key constraints to remain active, variables to be maximized or minimized, priority for adjusting manipulated variables, and so forth.'' They suggest that an issue for improvement in today's real-time optimization systems is to select the control system that yields the highest profit for a range of disturbances that occur between each execution of the optimization.

Some work has also been done on non-square plants, i.e. plants with more outputs than inputs, e.g. (Cao, 1995) and (Chang and Yu, 1990). These works assume that the control goal is to keep all the output variables at given setpoints, and often the effect of disturbances is not considered. It may be more suitable to define a cost function $J$ for the operation and reformulate these problems into the framework of self-optimizing control.

Selection of manipulated variables ($m$)

By manipulated variables we refer to the physical degrees of freedom, typically the valve positions or electric power inputs. Actually, the selection of these variables is usually not much of an issue at the stage of control structure design, since these variables usually follow as a direct consequence of the design of the process itself.

However, there may be some possibility of adding valves or moving them. For example, if we install a bypass pipeline and a valve, then we may use the bypass flow as an extra degree of freedom for control purposes.

Finally, let us make it clear that the possibility of not actively using some manipulated variables (or only changing them rarely), is a decision that is included above in ``selection of controlled variables''.

Selection of measurements ($v$)

Controllability considerations, including dynamic behavior, are important when selecting which variables to measure. There are often many possible measurements we can make, and the number, location and accuracy of the measurements involve a tradeoff between the cost of the measurements and the benefits of improved control. A controllability analysis may be very useful. In most cases the selection of measurements must be considered simultaneously with the selection of the control configuration. For example, this applies to the issue of stabilization and the use of secondary measurements.

Selection of control configuration

The issue of control configuration selection, including multiloop (decentralized) control, is discussed in Hovd and Skogestad (1993) and in sections 10.6, 10.7 and 10.8 of Skogestad and Postlethwaite (1996), and we will here discuss mainly issues which are not covered there.

The control configuration is the structure of the controller $K$ that interconnects the measurements, setpoints $c_s$ and manipulated variables $m$. The controller can be structured (decomposed) into blocks both in a vertical (hierarchical) and in a horizontal (decentralized control) manner.

Why, instead of finding the truly optimal centralized controller, is the controller decomposed? (1) The first reason is that it may require less computation. This reason may be relevant in some decision making systems where there is limited capacity for transmitting and handling information (like in most systems where humans are involved), but it does not hold in today's chemical plants where information is centralized and computing power is abundant. Two other reasons often given are (2) failure tolerance and (3) the ability of local units to act quickly to reject disturbances (e.g. Findeisen et al., 1980). These reasons may be more relevant, but, as pointed out by Skogestad and Hovd (1995), there are probably other even more fundamental reasons. The most important one is probably (4) to reduce the cost involved in defining the control problem and setting up the detailed dynamic model which is required in a centralized system with no predetermined links. Also, (5) decomposed control systems are much less sensitive to model uncertainty (since they often use no explicit model). In other words, by imposing a certain control configuration, we are implicitly providing process information, which, with a centralized controller, we would need to supply explicitly through the model.

Stabilizing control

Instability requires the active use of manipulated variables ($m$) using feedback control. There exist relatively few systematic tools to assist in selecting a control structure for stabilizing control. Usually, single-loop controllers are used for stabilization, and issues are which variables to measure and which manipulated variables to use. One problem in stabilization is that measurement noise may cause large variations in the input such that it saturates. Havre and Skogestad (1996, 1998) have shown that the pole vectors may be used to select measurements and manipulated variables such that this problem is minimized.
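As a small numerical illustration of the pole-vector idea (the state-space matrices below are assumed for illustration and are not taken from Havre and Skogestad): for an unstable mode with right eigenvector $t$ and left eigenvector $q$, the output pole vector $Ct$ and the input pole vector $B^T q$ indicate which measurements and manipulated variables are most effective for stabilizing that mode.

\begin{verbatim}
import numpy as np

# Assumed state-space model (A, B, C) with one unstable mode (eigenvalue 0.5).
A = np.array([[0.5,  1.0],
              [0.0, -2.0]])
B = np.array([[0.2,  1.0],
              [1.0,  0.1]])
C = np.array([[1.0,  0.0],
              [0.3,  1.0]])

eigvals, T = np.linalg.eig(A)      # columns of T are right eigenvectors
Q = np.linalg.inv(T).conj().T      # columns of Q are left eigenvectors

for i, lam in enumerate(eigvals):
    if lam.real > 0:               # only the unstable modes matter here
        y_p = C @ T[:, i]          # output pole vector (which measurement to use)
        u_p = B.conj().T @ Q[:, i] # input pole vector (which input to use)
        print(f"unstable mode {lam.real:.2f}:",
              "output pole vector", np.round(np.abs(y_p), 2),
              "input pole vector", np.round(np.abs(u_p), 2))
\end{verbatim}

Large entries in these vectors point to the measurement and input that, loosely speaking, ``see'' the unstable mode best, which is the basis for the selection rule referred to above.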

Secondary measurements

Extra (secondary) measurements are often added to improve the control. Three alternatives for use of extra measurements are:

  1. Centralized controller: All the measurements are used to compute the optimal input. This controller has implicitly an estimator (model) hidden inside it.
  2. Inferential control: Based on the measurements a model is used to provide an estimate of the primary output (e.g. a controlled output $c$). This estimate is sent to a separate controller.
  3. Cascade control: The secondary measurements are controlled locally and their setpoints are used as degrees of freedom at some higher layer in the hierarchy.

Note that both centralized and inferential control use the extra measurements to estimate parameters in a model, whereas in cascade control they are used for additional feedback. The subject of estimation and measurement selection for estimation is beyond the scope of this review article; we refer to Ljung (1987) for a control view and to Martens (1989) for a chemometrics approach to this issue. However, we would like to point out that the control system should be designed for the best possible control of the primary variables ($c$), and not the best possible estimate. A drawback of the inferential scheme is that the estimate is used in a feed-forward manner.

For cascade control Havre (1998) has shown how to select secondary measurements such that the need for updating the setpoints is small. The issues here are similar to those of selecting controlled variables ($c$) discussed above. One approach is to minimize some norm of the transfer function from the disturbance and control error in the secondary variable to the control error in the primary variable. A simpler, but less accurate, alternative is to maximize the minimum singular value of the transfer function from the inputs used to control the secondary measurements to those measurements. Lee, Morari and coworkers (Lee and Morari, 1991; Lee et al., 1995; Lee et al., 1997) use a more rigorous approach where model uncertainty is explicitly considered and the structured singular value is used as a tool.

Partial control

Most control configurations are structured in a hierarchical manner with fast inner loops, and slower outer loops that adjust the setpoints for the inner loops. Control system design generally starts by designing the inner (fast) loops, and then outer loops are closed in a sequential manner. Thus, the design of an ``outer loop'' is done on a partially controlled system. We here provide some simple but yet very useful relationships for partially controlled systems. We divide the outputs into two classes:

  1. $y_1$ - (temporarily) uncontrolled outputs (for which there is an associated control objective)
  2. $y_2$ - (locally) measured and controlled outputs

We have inserted the word temporarily above, since $y_1$ is normally a controlled output at some higher layer in the hierarchy. We also subdivide the available manipulated variables in a similar manner:

  1. $u_2$ - inputs used for controlling $y_2$
  2. $u_1$ - the remaining inputs

A block diagram of the partially controlled system resulting from closing the loop involving $u_2$ and $y_2$ with the local controller $K_2$ is shown in Figure [*].

Figure: Block diagram of a partially controlled plant

Skogestad and Postlethwaite (1996) distinguish between the following four cases of partial control:

       Case                                    Meas./Control    Control objective
                                               of $y_1$?        for $y_2$?
  I    Indirect control                        No               No
  II   Sequential cascade control              Yes              No
  III  ``True'' partial control                No               Yes
  IV   Sequential decentralized control        Yes              Yes

In all cases there is a control objective associated with $y_1$ and a measurement of $y_2$. For example, for indirect control there is no separate control objective on $y_2$; the reason we control $y_2$ is to indirectly achieve good control of $y_1$, which is not controlled. The first two cases are probably the most important, as they are related to vertical (hierarchical) structuring. The latter two cases (where $y_2$ has its own control objective so that the setpoints $y_{2s}$ are not adjustable) give a horizontal structuring.

In any case, the linear model for the plant can be written

\begin{displaymath}
y_1 = G_{11}(s) u_1 + G_{12}(s) u_2 + G_{d1}(s) d \qquad (1)
\end{displaymath}
\begin{displaymath}
y_2 = G_{21}(s) u_1 + G_{22}(s) u_2 + G_{d2}(s) d \qquad (2)
\end{displaymath}

To derive transfer functions for the partially controlled system we simply solve ([*]) with respect to $u_2$ (assuming that $G_{22}(s)$ is square and invertible at a given value of $s$):


\begin{displaymath}
u_2 = G_{22}^{-1}(s) \left( y_2 - G_{21}(s) u_1 - G_{d2}(s) d \right) \qquad (3)
\end{displaymath}

Substituting ([*]) into ([*]) then yields (Havre and Skogestad, 1996a)
\begin{displaymath}
\fbox{$y_1 = P_u(s) u_1 + P_d(s) d + P_y(s) y_2$} \qquad (4)
\end{displaymath}

which is the model with $u_2$ formally replaced by $y_2$ as an independent variable, and
\begin{displaymath}
P_u(s) \buildrel \rm def \over = G_{11}(s) - G_{12}(s) G_{22}^{-1}(s) G_{21}(s) \qquad (5)
\end{displaymath}
\begin{displaymath}
P_d(s) \buildrel \rm def \over = G_{d1}(s) - G_{12}(s) G_{22}^{-1}(s) G_{d2}(s) \qquad (6)
\end{displaymath}
\begin{displaymath}
P_y(s) \buildrel \rm def \over = G_{12}(s) G_{22}^{-1}(s) \qquad (7)
\end{displaymath}

Here $P_d$ is the partial disturbance gain, $P_y$ is the gain from $y_2$ to $y_1$, and $P_u$ is the partial input gain from the unused inputs $u_1$ (with $y_2$ constant). If we look more carefully at ([*]) then we see that the matrix $P_d$ gives the effect of disturbances on the primary outputs $y_1$, when the manipulated variables $u_2$ are adjusted to keep $y_2$ constant, which is consistent with the original definition of the partial disturbance gain given by Skogestad and Wolff (1992). Note that no approximation about perfect control has been made when deriving ([*]). Equation ([*]) applies for any fixed value of $s$ (on a frequency-by-frequency basis, $s=j\omega $).
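At a fixed value of $s$ these expressions are plain matrix algebra, as the small sketch below illustrates for the steady-state ($s=0$) gains of an assumed, arbitrarily chosen example.

\begin{verbatim}
import numpy as np

# Assumed steady-state (s = 0) gain matrices, partitioned as in eqs. (1)-(2):
# y1/y2 are the primary/secondary outputs, u1/u2 the corresponding inputs.
G11 = np.array([[1.0]]);  G12 = np.array([[0.8]])
G21 = np.array([[0.4]]);  G22 = np.array([[2.0]])
Gd1 = np.array([[0.5]]);  Gd2 = np.array([[1.0]])

G22_inv = np.linalg.inv(G22)          # G22 must be square and invertible

P_u = G11 - G12 @ G22_inv @ G21       # partial input gain, eq. (5)
P_d = Gd1 - G12 @ G22_inv @ Gd2       # partial disturbance gain, eq. (6)
P_y = G12 @ G22_inv                   # gain from y2 (setpoint) to y1, eq. (7)

print("P_u =", P_u.item(), " P_d =", P_d.item(), " P_y =", P_y.item())
\end{verbatim}

In this example $P_d = 0.5 - 0.8 \cdot 0.5 \cdot 1.0 = 0.1$, i.e. closing the $y_2$ loop reduces the effect of the disturbance on $y_1$.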

The above equations are simple yet very useful. Relationships containing parts of these expressions have been derived by many authors, e.g. see the work of Manousiouthakis et al. (1986) on block relative gains and the work of Häggblom and Waller (1988) on distillation control configurations.

Note that this kind of analysis can be performed at each layer in the control system. At the top layer we may sometimes assume that the cost $J$ is a function of the variables $y_1$ (this is the approach of Shinnar (1981)), and we can then interpret $y_2$ as the set of controlled variables $c$. If $c$ is never adjusted then this is a special case of indirect control, and if $c$ is adjusted at regular intervals (as is usually done) then this may be viewed as a special case of sequential cascade control.


The Process Oriented Approach

We here review procedures for plantwide control that are based on using process insight, that is, methods that are unique to process control.

The first comprehensive discussion on plantwide control was given by Page Buckley in his book ``Techniques of process control'' in a chapter on Overall process control (Buckley, 1964). The chapter introduces the main issues, and presents what is still in many ways the industrial approach to plantwide control. In fact, when reading this chapter 35 years later, one is struck by the feeling that there has been relatively little development in this area. Some of the terms which are introduced and discussed in the chapter are material balance control (in the direction of flow, and in the direction opposite to flow), production rate control, buffer tanks as low-pass filters, indirect control, and predictive optimization. He also discusses recycle and the need to purge impurities, and he points out that at a given point in a plant one cannot control inventory (level, pressure) and flow independently, since they are related through the material balance. In summary, he presents a number of useful engineering insights, but there is really no overall procedure. As pointed out by Ogunnaike (1995), the basic principles applied by industry do not deviate far from Buckley (1964).

Wolff and Skogestad (1994) review previous work on plantwide control with emphasis on the process-oriented decomposition approaches. They suggest that plantwide control system design should start with a ``top-down'' selection of controlled and manipulated variables, and proceed with a ``bottom-up'' design of the control system. At the end of the paper ten heuristic guidelines for plantwide control are listed.

There exist other, more or less heuristic, rules for process control; e.g. see Hougen and Brockmeier (1969) and Seborg et al. (1995).


Degrees of freedom for control and optimization

A starting point for plantwide control is to establish the number of degrees of freedom for operation. Surprisingly, this is an area where there still seems to be some confusion. We may distinguish between dynamic degrees of freedom (for control) and steady-state degrees of freedom. We define:

$N_m$
Degrees of freedom for control: The number of variables (temperatures, pressures, levels etc.) that may be set by the control system.
$N_{ss}$
Degrees of freedom at steady state: The number of independent variables with a steady state effect.

Many authors suggest using the process model to find the degrees of freedom. The number of degrees of freedom is then the number of variables minus the number of equations. However, this approach is prone to errors, as it is easy to write too many or too few equations.

Fortunately, it is in most cases relatively straightforward to establish these numbers from process insight: The degrees of freedom for control, $N_m$, equal the number of adjustable valves plus the number of other adjustable electrical and mechanical variables (electric power, etc.). According to Skogestad (2000), the number of degrees of freedom at steady state can then be found by subtracting from $N_m$ the number of variables with no steady-state effect: $N_{ss} = N_m - (N_{m0}+N_{y0})$. Here

$N_{m0}$
is the number of manipulated variables, or combinations thereof, with no steady-state effect.
$N_{y0}$
is the number of manipulated variables that are used to control variables with no steady-state effect.
The latter usually equals the number of liquid levels with no steady-state effect, including most buffer tank levels. However, note that some liquid levels do have a steady-state effect, such as the level in a non-equilibrium liquid phase reactor, and levels associated with adjustable heat transfer areas. Also, we should not include in $N_{y0}$ any liquid holdups that are left uncontrolled, such as internal stage holdups in distillation columns.

Thus, $N_{y0}$ is nonzero for most chemical processes, whereas we often have $N_{m0}=0$. A simple example where $N_{m0}$ is non-zero is a heat exchanger with a bypass on both sides (i.e. $N_m=2$). However, at steady-state $N_{ss}=1$ since there is really only one operational degree of freedom, namely the heat transfer rate $Q$ (which at steady-state may be achieved by many combinations of the two bypasses), so we have $N_{m0}=1$.

The optimization is generally subject to several constraints. First, there are generally upper and lower limits on all manipulated variables (e.g. fully open or closed valve). In addition, there are constraints on many dependent variables; due to safety (e.g. maximum pressure or temperature), equipment limitations (maximum throughput), or product specifications. Some of these constraints will be active at the optimum. The number of ``free'' unconstrained variables ``for steady-state optimization'', $N_{ss,free}$, is then equal to

\begin{displaymath}N_{ss,free} = N_{ss} - N_{active} \end{displaymath}

where $N_{active}$ is the number of active constraints. Note that the term ``left for optimization'' may be somewhat misleading, since the decision to keep some constraints active, really follows as part of the optimization; thus all $N_{ss}$ variables are really used for optimization.
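As a purely illustrative count (the numbers are assumed, not taken from any particular process): consider a plant with $N_m = 10$ manipulated variables, of which a pair of bypasses provides only one steady-state effect ($N_{m0}=1$), two are used to control liquid levels with no steady-state effect ($N_{y0}=2$), and three constraints are active at the optimum. Then
\begin{displaymath}
N_{ss} = N_m - (N_{m0} + N_{y0}) = 10 - (1+2) = 7, \qquad
N_{ss,free} = N_{ss} - N_{active} = 7 - 3 = 4
\end{displaymath}
so four unconstrained steady-state degrees of freedom remain, for which self-optimizing controlled variables must be identified.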

Ponton (1994) proposes a rule for finding $N_{ss}$ by counting the number of streams and subtracting the number of ``extra'' phases (i.e. if there are more than one phase present in the unit). However, it is easy to construct simple examples where the rule fails. For example, consider a simple liquid storage tank (0 extra phases) with one inflow and one outflow (2 streams). According to the rule, we have $N_{ss} = 2$, but we know $N_{ss}=1$ since inflow must equal outflow at steady state.

Remark on design degrees of freedom. Above we have discussed operational degrees of freedom for control and optimization. The design degrees of freedom (which is not really a concern of this paper) include, in addition to some of the $N_{ss}$ steady-state operational degrees of freedom, all parameters related to the size of the equipment, such as the number of stages in column sections, area of heat exchangers, etc.

Luyben (1996) claims that the ``design degrees of freedom is equal to the number of control degrees for an important class of processes.'' This is clearly not true, as there is no general relationship between the two numbers. For example, consider a heat exchanger between two streams. Then there may be zero, one or two control degrees of freedom (depending on the number of bypasses), but there is always one design degree of freedom (the heat exchanger area).

Production rate

Identifying the major disturbances is very important in any control problem, and for process control the production rate (throughput) is often the main disturbance. In addition, the location of where the production rate is actually set (``throughput manipulator'') usually determines the control structure for the inventory control of the various units (Buckley, 1964). For a plant running at maximum capacity, the production rate is set at its bottleneck, which is usually inside the plant (e.g. caused by the maximum capacity of a heat exchanger or a compressor). Then, downstream of this location the plant has to process whatever comes in (given feed rate), and upstream of this location the plant has to produce the desired quantity (given product rate). To avoid any ``long loops'', it is preferable to use the input flow for inventory control upstream of the location where the production rate is set, and to use the output flow for inventory control downstream of this location.

From this it follows that it is critical to know where in the plant the production rate is set. In practice, the location may vary depending on operating conditions. This may require reconfiguration of many control loops, but often supervisory control systems, such as model predictive control, provide a simpler and better solution.

The concepts of partial control and dominant variables

Shinnar (1981) introduced the following sets of variables: the primary variables $Y_p$ (which express the operational goals), a smaller set of (secondary) controlled variables $Y_{cd}$, the dynamically manipulated variables $U_d$, and the manipulated variables $U_s$ that can only be changed slowly. The goal is to maintain $Y_p$ within prescribed limits and to achieve this goal ``we choose in most cases a small set $Y_{cd}$ and try to keep these at a fixed set of values by manipulating $U_d$'' (later, Arbel et al. (1996) introduced the term ``partial control'' to describe this idea).

Shinnar notes that the overall control algorithm can normally be decomposed into a dynamic control system (which adjust $U_d$) and a steady-state control which determines the set points of $Y_{cd}$ as well as the values of $U_s$ (the latter are the manipulations which can only be changed slowly), and that we ``look for a set $Y_{cd},U_d$ that contains variables that have a maximum compensating effect on $Y_p$''. If one translates the words and notation, then one realizes that Shinnar's idea of ``partial control'' is very close to the idea of ``self-optimizing control'' presented in Morari et al. (1980), Skogestad and Postlethwaite (1996), and Skogestad (2000). The difference is that Shinnar assumes that there exist at the outset a set of ``primary'' variables $Y_p$ that need to be controlled, whereas in self-optimizing control the starting point is an economic cost function that should be minimized.

The authors provide some intuitive ideas and examples for selecting dominant variables, which may be useful in some cases, especially when no model information is available. However, it is not clear how helpful the idea of a ``dominant'' variable is, since such variables are not really defined and no explicit procedure is given for identifying them. Indeed, Arbel et al. (1996) write that ``the problems of partial control have been discussed in a heuristic way'' and that ``considerably further research is needed to fully understand the problems in steady-state control of chemical plants''. Tyreus (1999b) provides some additional ideas on how to select dominant variables, partly based on the extensive variable idea of Georgakis (1986) and the thermodynamic ideas of Ydstie (Alonso and Ydstie, 1996). It should also be added that the optimal control strategy will depend on the optimal way of operating the plant, and therefore on cost data, whereas thermodynamics is independent of cost. It is therefore clear that thermodynamics can at most provide guidelines, and never a final answer, on control structure design.

Decomposition of the problem

The task of designing a control system for complete plants is a large and difficult task. Therefore most methods will try to decompose the problem into manageable parts. Four common ways of decomposing the problem are

  1. Decomposition based on process units
  2. Decomposition based on process structure
  3. Decomposition based on control objectives (material balance, energy balance, quality, etc.)
  4. Decomposition based on time scale
The first is a horizontal (decentralized) decomposition whereas the latter three provide hierarchical decompositions. Most practical approaches contain elements from several categories.

Many of the methods described below suggest performing the optimization at the end of the procedure, after checking whether there are degrees of freedom left. However, as discussed above, it is possible to identify the steady-state degrees of freedom initially and perform an optimization to identify controlled variables ($c$'s) that achieve self-optimizing control (a ``top-down'' approach), and afterwards to design ``bottom-up'' a control system which, in addition to satisfying other objectives, is able to control these variables at their setpoints. This is the approach we advocate.

It is also interesting to see how the methods differ in terms of the importance assigned to inventory (level) control. Some regard inventory control as the most important (as is probably correct when viewed purely from an operational point of view) whereas Ponton (1994) states that ``inventory should normally be regarded as the least important of all variables to be regulated'' (which is correct when viewed from a design point of view). We feel that there is a need to integrate the viewpoints of the control and design people.

The unit based approach

The unit-based approach, suggested by Umeda et al. (1978), proposes to

  1. Decompose the plant into individual units of operation
  2. Generate the best control structure for each unit
  3. Combine all these structures to form a complete one for the entire plant.
  4. Eliminate conflicts among the individual control structures through mutual adjustments.

This approach has always been widely used in industry, and its main advantage is that many effective control schemes have been established over the years for individual units (e.g. Shinskey (1988)). However, with the increasing use of material recycle, heat integration and the desire to reduce buffer volumes between units, this approach may result in too many conflicts and become impractical.

As a result, one has to shift to plantwide methods, where a hierarchical decomposition is used. The first such approach was Buckley's (1964) division of the control system into material balance control and product quality control, and three plantwide approaches, partly based on his ideas, are described in the following.

Hierarchical decomposition based on process structure

The hierarchy given in Douglas (1988) for process design starts at a crude representation and gets more detailed:

Level 1: Batch vs. continuous
Level 2: Input-output structure
Level 3: Recycle structure
Level 4: General structure of the separation system
Level 5: Energy interaction
Fisher et al. (1988) propose to use this hierarchy when performing a controllability analysis, and Ponton and Laing (1993) point out that this hierarchy (e.g. levels 2 to 5) could also be used for control system design. This framework enables parallel development of the process and the control system. Within each of the levels above, any design method might be applied.

Ng and Stephanopoulos (1998b) propose to use a similar hierarchy for control structure design. The difference between the hierarchies of Douglas (1988) and Ng and Stephanopoulos (1998b) is that level 1 is replaced by a preliminary analysis and that levels 4 and 5 are replaced by more detailed structures. At each step, the objectives identified at the earlier steps are translated to the current level and new objectives are identified. The focus is on the construction of mass and energy balance control. The method is applied to the Tennessee Eastman case.

All these methods have in common that at each step (level), a key point is to check if there remain enough manipulated variables to meet the constraints and to optimize operation. The methods are easy to follow and give a good process understanding, and the concept of a hierarchical view is possible to combine with almost any design method.

Hierarchical decomposition based on control objectives

The hierarchy based on control objectives is sometimes called the tiered procedure. This bottom-up procedure focuses on the tasks that the controller has to perform. Normally one starts by stabilizing the plant, which mainly involves placing inventory (mass and energy) controllers.

Price et al. (1993) build on the ideas introduced by Buckley (1964) and propose a tiered framework. The framework is divided into four different tasks:

I: Inventory and production rate control
II: Product specification control
III: Equipment & operating constraints
IV: Economic performance enhancement
Their paper does not discuss points III or IV. They perform a large number (318) of simulations with different control structures, controllers (P or PI), and tunings on a simple process consisting of a reactor, a separator and recycle of unreacted reactant. The configurations are ranked based on the integrated absolute error of the product composition for steps in the disturbance. From these simulations they propose some guidelines for selecting the through-put manipulator and the inventory controls: (1) Prefer internal flows as the through-put manipulator. (2) The through-put manipulator and the inventory controls should be self-consistent (self-consistency is fulfilled when a change in the through-put propagates through the process by ``itself'' and does not depend on composition controllers). They apply their ideas to the Tennessee Eastman problem (Price et al., 1994).
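As an illustration of the ranking metric used by Price et al. (1993), the sketch below computes the integrated absolute error (IAE) of the product composition for two hypothetical closed-loop responses to a disturbance step; the first-order recoveries merely stand in for the simulated responses of competing control structures.

    # IAE ranking of two made-up closed-loop composition responses to a step disturbance.
    import numpy as np

    t = np.linspace(0.0, 50.0, 2001)               # time grid [min]
    dt = t[1] - t[0]

    def composition_error(t, tau):
        # hypothetical deviation in product composition after a step disturbance,
        # recovering with closed-loop time constant tau
        return 0.02 * np.exp(-t / tau)

    structures = {"structure A (tau = 5 min)": 5.0,
                  "structure B (tau = 15 min)": 15.0}

    for name, tau in structures.items():
        e = np.abs(composition_error(t, tau))
        iae = np.sum(0.5 * (e[:-1] + e[1:]) * dt)  # trapezoidal integration of |e|
        print(f"{name}: IAE = {iae:.3f}")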

Ricker (1996) comments on the work of Price et al. (1994) and points out that plants are often run at full capacity, corresponding to constraints in one or several variables. If a manipulated variable used for level control saturates, one loses a degree of freedom for maximum production. This should be considered when choosing the through-put manipulator.

Luyben et al. (1997) point out three limitations of the approach of Buckley. First, he did not explicitly discuss energy management. Second, he did not look at recycles. Third, he placed emphasis on inventory control before quality control. Their plantwide control design procedure is listed below:

  1. Establish control objectives.
  2. Determine the control degrees of freedom by counting the number of independent valves.
  3. Establish energy inventory control, for removing the heats of reactions and to prevent propagation of thermal disturbances.
  4. Set production rate. The production rate can only be increased by increasing the reaction rate in the reactor. One recommendation is to use the input to the separation section.
  5. Product quality and safety control. Here they recommend the usual ``pair close''-rule.
  6. Inventory control. Fix a flow in all liquid recycle loops. They state that all liquid levels and gas pressures should be controlled.
  7. Check component balances. (After this point it might be necessary to go back to item 4.)
  8. Unit operations control.
  9. Use remaining control degrees of freedom to optimize economics or improve dynamic controllability.
They apply their procedure to several test problems: the vinyl acetate monomer process, the Tennessee Eastman process, and the HDA process.

Step 3 comes before determining the throughput manipulator, since the reactor is typically the heart of the process and the methods for heat removal are intrinsically part of the reactor design. In order to avoid recycling of disturbances they suggest setting a flow rate in all recycle loops; this is discussed further in section [*]. They suggest in step 6 to control all inventories, but this may not be necessary in all cases; e.g. it may be optimal to let the pressure float (Shinskey, 1988). We recommend (see below) to combine steps 1 and 9, that is, the selection of controlled variables (control objectives) in step 1 should be based on overall plant economics.

McAvoy (1999) presents a method where the control objectives are divided into two categories: variables that ``must'' be controlled, and product flow and quality. His approach is to identify the set of inputs that minimizes valve movements. This is first solved for the ``must'' variables, then for product rate and quality. The optimization problem is simplified by using a linear, stable, steady-state model. He gives no guidance on how to identify the controlled variables.


Hierarchical decomposition based on time scales

Buckley (1964) proposed to design the quality control system as high-pass filters for disturbances and the mass balance control system as low-pass filters. If the resonance frequency of the quality control system is designed to be an order of magnitude higher than the break frequency of the mass balance system, then the two loops will be non-interacting.
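Buckley's separation can be checked numerically by comparing closed-loop bandwidths, which here stand in for the resonance and break frequencies he refers to. The first-order loops and tunings in the sketch below are made up; the only point is the order-of-magnitude ratio.

    # Rough check of time-scale separation between a fast quality loop and a slow
    # mass-balance (level) loop; the processes and tunings are hypothetical.
    import numpy as np

    def closed_loop_bandwidth(k, tau, Kc):
        # P-controlled first-order process: T(s) = Kc*k / (tau*s + 1 + Kc*k);
        # return the frequency [rad/s] where |T| drops to 1/sqrt(2) of its low-frequency value
        w = np.logspace(-4, 2, 2000)
        T = Kc * k / (tau * 1j * w + 1 + Kc * k)
        mag = np.abs(T) / np.abs(T[0])
        return w[np.argmax(mag < 1 / np.sqrt(2))]

    wb_quality = closed_loop_bandwidth(k=1.0, tau=2.0, Kc=20.0)   # fast quality loop
    wb_level = closed_loop_bandwidth(k=1.0, tau=20.0, Kc=2.0)     # slow mass-balance loop
    print(f"quality/mass-balance bandwidth ratio: {wb_quality / wb_level:.1f}")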

McAvoy and Ye (1994) divide their method into four stages:

  1. Design of inner cascade loops.
  2. Design of basic decentralized loops, except those associated with quality and production rate.
  3. Production rate and quality controls.
  4. Higher layer controls.
The decomposition in stages 1-3 is based on the speed of the loops. In stage 1 the idea is to locally reduce the effect of disturbances. In stage 2 there generally are a large number of alternative configurations. These may be screened using simple controllability tools, such as the RGA. One problem of selecting outputs based on a controllability analysis is that one may end up with the outputs that are easy to control, rather than the ones that are important to control. The method is applied to the Tennessee Eastman test problem.
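The RGA screening mentioned above is a simple matrix computation, $\Lambda = G \times (G^{-1})^T$ (element-wise product), applied to the steady-state gain matrix of each candidate configuration. A minimal sketch, with a made-up gain matrix:

    # Relative gain array (RGA) for a made-up 2x2 steady-state gain matrix;
    # pairings with RGA elements close to 1 are preferred.
    import numpy as np

    def rga(G):
        # element-wise product of G with the transpose of its inverse
        return G * np.linalg.inv(G).T

    G = np.array([[2.0, -0.5],
                  [1.0,  1.5]])   # hypothetical steady-state gain matrix
    print(np.round(rga(G), 2))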

Douglas (1988, p. 414) presents a hierarchy for control system design, based on steady-state, normal dynamic response and abnormal dynamic operation. Zheng et al. (1999) continue this work and place greater attention on feasibility in the face of constraints and on robust optimality (self-optimizing control). Zheng and Mahajannam (1999) propose using the minimum surge capacity as a dynamic cost.


The reactor, separator and recycle plant

A common feature of most plants is the presence of recycle. A simple example is distillation, with recycle (``reflux'') of liquid from the top of the column and of vapor from the bottom of the column.

In this section, we consider the reactor and separator process with recycle of unreacted feed from a reactor. This problem has lately been studied by many authors, e.g. (Papadourakis et al., 1987), (Wolff et al., 1992), (Price et al., 1993), (Luyben, 1994), (Luyben and Floudas, 1994), (Mizsey and Kalmar, 1996), (Wu and Yu, 1996), (Hansen, 1998), (Ng and Stephanopoulos, 1998a). It may be difficult to follow all the details in the case studies presented, so instead we aim in this section to gain some basic insight into the problem.

In the simplest case, let the reactor be a CSTR where component A is converted to a product and the amount converted is

\begin{displaymath} P = k \, z_A \, M \quad \mathrm{[mol~A/s]} \end{displaymath}

The unreacted A is separated from the product and recycled back to the reactor (for simplicity we will here assume perfect separation). To increase the conversion $P$ one then has three options:

  1. Increase the temperature in order to increase the reaction rate constant $k$ [s$^{-1}$].
  2. Increase the amount of recycle, which indirectly increases the fraction of A in the reactor, $z_A$ [mol A/mol].
  3. Increase the reactor holdup $M$ [mol]. (In a liquid phase system the reactor holdup is determined by the reactor level, and in a gas phase system by the reactor pressure.)

Here we will assume that the temperature is constant, so there are two options left. Since at steady-state with given product specifications the conversion of A in the reactor is given by the feed rate, it follows that the two remaining options are dependent, so if we control one variable, then the other variable will "float" and adjust itself.

Two common control strategies are then

(A)
Control the reactor holdup (and let the recycle flow float)
(B)
Control the recycle flow (and let the reactor holdup float).
In case (A) one may encounter the so-called "snowball effect" where the recycle goes to infinity. This occurs because at infinite recycle flow we have $z_A= 1$ which gives the highest possible production. In effect, the snowball effect occurs because the reactor is too small to handle the given feed rate, so it is really a steady-state design problem.
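The snowball effect can be seen directly from the steady-state balances of this simple example. Assuming an equimolar reaction, perfect separation and a pure-A recycle (these assumptions are ours, for illustration), all fresh A must eventually react, so $k z_A M = F_0$, giving $z_A = F_0/(kM)$ and a recycle flow $R = F_0 z_A/(1-z_A) = F_0^2/(kM - F_0)$, which grows without bound as the feed rate $F_0$ approaches the reactor capacity $kM$. The numbers in the sketch below are illustrative only.

    # Steady-state recycle flow under strategy (A) (fixed reactor holdup M):
    # the recycle "snowballs" as the feed rate approaches the reactor capacity k*M.
    k, M = 0.05, 1000.0                        # rate constant [1/s], fixed reactor holdup [mol]
    capacity = k * M                           # maximum possible conversion rate [mol A/s]

    for frac in [0.5, 0.8, 0.9, 0.95, 0.99]:   # feed rate as a fraction of reactor capacity
        F0 = frac * capacity
        zA = F0 / (k * M)                      # reactor composition needed to convert the feed
        R = F0 * zA / (1.0 - zA)               # recycle of unreacted A [mol A/s]
        print(f"F0/kM = {frac:.2f}:  zA = {zA:.2f},  recycle R = {R:8.1f} mol A/s")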

Luyben (1992, 1994) has studied liquid phase systems and has concluded that control strategy (B) (or a variant of it) with one flow fixed in the recycle loop should be used to avoid the "snowball effect".

Wu and Yu (1996) also study the snowball effect and propose to distribute the ``load'' evenly between the different units. In effect, they suggest to let the reactor volume vary and

(C)
Control the reactor composition.

However, from an economic point of view one should in most cases for liquid phase systems (including the one studied here) keep the reactor level at its maximum value. This maximizes the conversion per pass and results in the smallest possible recycle, which generally (unless byproducts are formed) reduces the operational cost. Thus, the recommendations of Luyben (1992, 1994) and Wu and Yu (1996) carry a steady-state economic penalty which most researchers seem to have neglected so far.

On the other hand, for gas phase systems there is usually an economic penalty, from compression costs, involved in increasing the reactor holdup (i.e. the reactor pressure), and strategy (B), where we let the holdup (pressure) float, may in fact be economically optimal. Indeed, such schemes are used in industry, e.g. in ammonia plants. For example, for processes with gas recycle and purge, Fisher et al. (1988) recommend keeping the gas recycle constant at its maximum value.

Wolff et al. (1992) studied a similar plant. They included an inert component and looked at the effects of recycle on the controllability of the process. Their conclusion is that the purge stream flow should be used to control the inert composition. They did not consider the reactor holdup as a possible controlled variable.

All the above works have in common that the authors are searching for the right controlled variables to keep constant (recycle flow, reactor volume, composition, etc.). However, a common basis for comparing the alternatives seems to be lacking. In terms of future work, we propose that one first needs to define clearly the objective function (cost) $J$ for the operation of the reactor system. Only when this is given, may one decide in a rigorous manner on the best selection of controlled variables, for example by using the idea of ``self-optimizing'' control and evaluating the loss.

Tennessee Eastman Process

Introduction to the test problem

The problem of Downs and Vogel (1993) was first proposed at an AIChE meeting in 1990 and has since been studied by many authors. The process has four feed streams, one product stream, and one purge stream to remove inert (B) and byproduct (F). The reactions are

A(g) + C(g) + D(g) $\rightarrow$ G(liq), Product 1,
A(g) + C(g) + E(g) $\rightarrow$ H(liq), Product 2,
A(g) + E(g) $\rightarrow$ F(liq), Byproduct,
3D(g) $\rightarrow$ 2F(liq), Byproduct.

Figure: Tennessee Eastman process flowsheet

All reactions are irreversible, exothermic and temperature dependent via the Arrhenius expression. The process has five major units: a reactor, a product condenser, a vapor-liquid separator, a recycle compressor and a product stripper; see Figure [*]. There are 41 measurements and 12 manipulated variables. For a more detailed description see Downs and Vogel (1993).

Ricker (1995) considered the steady-state optimal operation of the plant. In all cases, he found that it is optimal to have maximum reactor pressure, minimum reactor level, maximum agitator speed, and minimum steam valve opening. Furthermore, in most cases it is optimal to use minimum compressor recycle valve opening.

McAvoy and Ye solution

At stage 1, McAvoy and Ye (1994) close inner cascade loops involving eight flows and two temperatures. This reduces the effect of the disturbances associated with these loops. At stage 3 they use a simple mass balance of the plant. This gives some constraints for stage 2, for example that either the C-feed or the product flow must be left for the third stage.

At stage 2, decentralized loops are closed. They start with the level loops, since these are the most important loops. There are three level loops (reactor, separator and stripper), and they consider four possible level configurations. Three of the configurations were ruled out based on a controllability analysis. The alternative where the E-feed is used for reactor level control is analyzed in greater detail. They look at three $6\times 6$, eighteen $5\times 5$, and fifteen $4\times 4$ systems, where the controlled variables seem to be rather randomly chosen. After an analysis involving the RGA, the Niederlinski index and linear valve saturation, only four alternatives are left. These are further screened on their steady-state behavior for a range of disturbances. In addition to levels, production rate and % G in the product (which must be controlled), they propose to control reactor temperature, reactor pressure, recycle flow rate, compressor power, concentration of B in the purge, and concentration of E in the product flow.
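The Niederlinski index used in this screening is $N_I = \det G(0) / \prod_i g_{ii}(0)$ for the chosen diagonal pairing; a negative value rules the pairing out, since a positive index is a necessary condition for stability with integral action in all loops. A minimal sketch with a made-up $3\times 3$ gain matrix:

    # Niederlinski index for a made-up 3x3 steady-state gain matrix and its diagonal pairing.
    import numpy as np

    def niederlinski(G):
        # NI = det(G(0)) / prod(g_ii(0))
        return np.linalg.det(G) / np.prod(np.diag(G))

    G = np.array([[ 1.0,  0.8, -0.2],
                  [ 0.4,  2.0,  0.3],
                  [-0.1,  0.5,  1.5]])    # hypothetical gain matrix
    print(f"NI = {niederlinski(G):.2f}")  # NI < 0 would rule out this pairing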

Lyman, Georgakis and Price's solution

Georgakis and coworkers have studied the problem in several papers (Lyman and Georgakis, 1995), (Price et al., 1994). They start by identifying the primary path, which goes from the raw materials, through the reactor, the condenser and the stripper, to the product flow. They do not consider the C-feed since it is in excess in the recycle. Price et al. (1994) list all candidates for through-put manipulation along the primary path: the feed streams, the flow of coolant to the reactor condenser, the separator drum bottoms flow and the final product flow. Of the feeds, only D is considered. As noted by the authors, one possible through-put manipulator is missing, namely the C-feed, since it was assumed not to be on the primary path. Next, they list the inventories that need to be controlled: pressure, reactor level, separator level and stripper level. Inventory controls are chosen so as to construct a self-consistent path (which does not depend on quality controllers). At this point they have four different structures. In the end they suggest to control reactor temperature, reactor level, recycle flow rate, agitation rate, composition of A, D and E in the reactor feed, composition of B (inert) in the purge and composition of E in the product. Even though they consider the operating cost for the control structure, it can never become economically optimal since variables that should be kept at their constraints (like the recycle valve) are used in control loops.

Their procedure is simple and clear to follow. The result is a control system that is fairly simple to understand.

Ricker's solution and related work

Ricker and Lee (1995) use nonlinear model predictive control (NMPC), and compare with the multiloop (decentralized) strategy of McAvoy and Ye (1994) which they find performs adequately for many scenarios, but they suggest that compressor power should not be controlled. For these simpler cases the NMPC strategy improves performance, but the difference may be too small to justify the NMPC design effort. On the other hand, for the more difficult cases, the decentralized approach would require multiple overrides to handle all conditions, and nonlinear model predictive control may be preferred.

In another study, Ricker (1996) considers decentralized control and concludes that there is little, if any, advantage to the use of NMPC in this application. He focuses on the selection of controlled variables. First, he suggests controlling variables which optimally should be at their constraints. Second, he excludes variables for which the economically optimal value varies a lot. This is in agreement with the idea of self-optimizing control. He ends up controlling the recycle valve position (at minimum), the steam valve position (at minimum), the reactor level (at minimum), the reactor temperature, the composition of $A+C$ in the reactor feed, and the composition of $A$ in the reactor feed. He notes that it is important to determine appropriate setpoint values for the latter three controlled variables. In addition, overrides are installed. The production rate manipulator is chosen as the input that is most likely to saturate, namely a combination of D and E.

Larsson and Skogestad (2000) follow up the work of Ricker (1995, 1996) on selecting controlled variables based on steady-state economics. A degree of freedom analysis reveals that there are 8 degrees of freedom at steady-state. In the nominal case (mode 1), 5 constraints are active at the optimum (Ricker, 1995), which leaves 3 unconstrained degrees of freedom. They systematically go through most of the alternative controlled variables. They find that good self-optimizing properties are achieved when controlling, in addition to the optimally constrained variables, the reactor temperature, the recycle flowrate (or compressor work), and the composition of A in the purge (or in the reactor feed). They also find that the suggestion of Ricker (1996) of controlling reactor temperature, A in the reactor feed, and C in the reactor feed is among the better choices from a self-optimizing point of view. Larsson and Skogestad (2000) conclude that the inert (B) composition should not be controlled, which is against the recommendations of most other authors except Ricker (1996). For the case they study, with a given production rate, they also find that the reactant feed rates, the purge rate and the reactor feedrate should not be selected as controlled variables.
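The systematic part of this evaluation is essentially combinatorial: with 8 steady-state degrees of freedom and 5 active constraints, each candidate set contains 3 controlled variables, and the candidate sets are enumerated and ranked by their economic loss. The candidate list and the dummy loss function in the sketch below are placeholders; they only indicate the structure of the search.

    # Enumeration of candidate controlled-variable sets; the loss function is a placeholder
    # for the steady-state economic loss evaluated over the main disturbances.
    from itertools import combinations

    candidates = ["reactor temperature", "recycle flow", "compressor work",
                  "A in purge", "A in reactor feed", "C in reactor feed",
                  "B (inert) in purge", "reactor pressure"]

    n_dof, n_active = 8, 5
    n_select = n_dof - n_active          # 3 unconstrained degrees of freedom remain

    def loss(cv_set):
        # dummy ranking only; the real procedure evaluates the economic loss
        # with cv_set held at constant setpoints
        return sum(len(name) for name in cv_set) % 7

    sets = list(combinations(candidates, n_select))
    print(f"{len(sets)} candidate sets of {n_select} controlled variables to evaluate")
    print("lowest-loss set (dummy ranking):", min(sets, key=loss))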

Luyben and Tyreus' solutions

Luyben et al. (1997) look at two cases for control of through-put; with the product flow or with the A-feed. Here we only consider the case where the product flow is the through-put manipulator. In step 3 they look at energy inventory control, which in this case is to control the reactor temperature with the reactor cooling water. In step 5 they assign the stripper steam stream to control stripper temperature, and therefore also the product compositions. Since the pressure of the reactor must be kept within bounds, they use the largest gas feed (the feed of C) to control the reactor pressure. Step 7 is the check of component balances, which gives a composition controller for inert using the purge flow and a composition controller for A using the A-feed. After doing some simulations they add a controller for control of the condenser, using the reactor temperature. Their final scheme sets agitation rate and the recycle valve at their constraints (which is optimal from an economic point of view), and controls reactor pressure, reactor level, separator temperature, stripper temperature, flow and the ratio between E and D feed, composition of $A$ in purge, and composition of $B$ (inert) in purge.

The resulting control system is simple, but a better justification of which outputs to control could have been given.

Tyreus (1999a) uses a thermodynamic approach to solve the problem. He (correctly) sets the agitation on full speed and closes the steam and recycle valves. In addition, he controls reactor temperature, reactor pressure, reactor level, A in reactor feed and B in purge flow.

Ng and Stephanopoulos' solution

Ng and Stephanopoulos (1998b) start by stabilizing the reactor. Then they proceed to look at the input/output level of the plant, where the central point is to fulfill the material and energy balances. At this level it should have been possible to say something about how the feeds should be adjusted in order to achieve the right mix of G and H, but they do not. Rather, they look at which feed or exit flows should be used to maintain material balance control. At the final level they translate the control objectives into measurements. Here material balance control is translated into inventory controllers, like product flow to control the stripper level and bottoms flow to control the separator level. The next objective is then reactor pressure, to which the purge rate is assigned. Finally, the E feedrate is assigned to control the product ratio, and E is assigned to through-put control. The A and C feedrates are used for controlling the compositions of A and C. In summary, they propose to control reactor temperature, reactor level, reactor pressure, G in the product flow, stripper temperature, C in the reactor feed, A in the reactor feed and B (inert) in the purge flow.

The method is somewhat difficult to follow and they seem to repeat many of the arguments in each phase.

Other work

The above review is not complete, and there are many authors who have worked on this problem, e.g. (Banerjee and Arkun, 1995), (Wu and Yu, 1997), (Scali and Cortonesi, 1995).

Other test problems

There are several other suitable test problems for studying issues related to plantwide control. These include the HDA plant (Douglas, 1988), the vinyl acetate monomer process (Luyben and Tyreus, 1998), the recycle plant (Wu and Yu, 1996) and the Luyben and Luyben plant (Luyben and Luyben, 1995).

A new plantwide control design procedure

Based on the above review and as a conclusion to this paper, we propose a plantwide design procedure as summarized in Table 1. The procedure mainly follows the mathematically oriented approach, but with some elements from the process oriented approach.

We propose to first perform a top-down analysis to select the controlled variables, based on the ideas of self-optimizing control (step 1). For this we need a steady-state model and the operational objectives (steady-state economics). The result is one or more alternative sets of (primary) controlled variables ($y_1 = c$). The optimal production rate manipulator will usually follow from this analysis, but a more detailed analysis of this choice is recommended (step 2). Note that the selection of controlled variables is important also when multivariable control (e.g. MPC) is used in the lower control layers.

The top-down analysis is followed by a bottom-up assignment, and possibly design, of the control loops. This is done in a sequential manner and results in a hierarchical control system as shown in Figure [*]. Each controller should be of limited size (usually with as few inputs and outputs as possible), and emphasis should be placed on avoiding ``long'' loops, that is, one should pair inputs and outputs which are ``close'' to one another. Note that no degrees of freedom are lost as we close loops, since their setpoints become degrees of freedom for the higher layers.

The bottom-up design starts with the regulatory control layer (step 3), where the main objective is to facilitate manual operation when the more advanced control layers are not in use. We propose to start with stabilization (step 3a) (liquid level control, slowly drifting modes, etc.), where it is important to avoid input saturation. Next we consider the fast loops needed for local disturbance rejection (step 3b). Here we may make use of (extra) secondary measurements ($y_2$). The objective for the regulatory layer is that manual operation of the plant should be possible after these loops are closed.
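For step 3a, the pole-vector tool listed in Table 1 can be computed directly from a state-space model: for a mode with right eigenvector $t$ and left eigenvector $q$, the output pole vector is $Ct$ and the input pole vector is $B^H q$, and large elements indicate measurements and manipulators that are effective for stabilizing that mode (Havre and Skogestad, 1998). The state-space matrices in the sketch below are made up.

    # Pole vectors for the unstable mode of a made-up state-space model.
    import numpy as np

    A = np.array([[0.1,  1.0],
                  [0.0, -2.0]])          # one unstable mode (eigenvalue 0.1)
    B = np.array([[0.2,  1.0],
                  [1.0,  0.1]])          # two candidate manipulated variables
    C = np.array([[1.0,  0.0],
                  [0.5,  2.0]])          # two candidate measurements

    eigvals, T = np.linalg.eig(A)        # columns of T are right eigenvectors
    Q = np.linalg.inv(T).conj().T        # columns of Q are the corresponding left eigenvectors

    for i, p in enumerate(eigvals):
        if p.real > 0:                   # only unstable (or slowly drifting) modes matter here
            up = B.conj().T @ Q[:, i]    # input pole vector
            yp = C @ T[:, i]             # output pole vector
            print(f"mode p = {p.real:.2f}: |input pole vector| = {np.abs(up).round(2)}, "
                  f"|output pole vector| = {np.abs(yp).round(2)}")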

We now have as degrees of freedom the setpoints of the regulatory layer ($r_2$) plus any unused manipulators ($u_1$); these should be used to control the primary outputs ($c$) (step 4). This control layer is here called the supervisory control layer, but other names are in use, such as advanced control and coordinating control. There are two main approaches here: the use of single-loop (decentralized) controllers with possible feed-forward links (step 4a), or the use of multivariable control (step 4b), e.g. decoupling or model predictive control (MPC). Multivariable control with constraint handling may avoid the need for logic to reconfigure loops, and properly designed multivariable controllers give better performance. These advantages must be traded off against the cost of obtaining and maintaining the models used in the multivariable controller.

The main result of this procedure will be the control structure, but controller tunings may also be obtained. Iteration may be needed; for example, one may need to go back and consider alternative controlled variables (step 1) or another throughput manipulator (step 2) if the resulting control problem in step 4 is too difficult. Finally, nonlinear dynamic simulations should be performed to validate the proposed control structure (step 6).
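As a minimal illustration of step 6, the sketch below simulates the simple reactor/separator/recycle example from the previous section under strategy (A) (reactor level controlled with the reactor effluent, perfect and instantaneous separation, recycle of all unreacted A). The model, tuning and disturbance are our own illustrative choices; the point is only the kind of closed-loop check intended.

    # Nonlinear dynamic validation of a level-control structure for the simple
    # reactor/separator/recycle example; all numbers are illustrative only.
    from scipy.integrate import solve_ivp

    k, M_sp = 0.05, 1000.0                    # rate constant [1/s], reactor level setpoint [mol]
    Kc, Fout_nom = 2.0, 200.0                 # P-controller gain and bias for the effluent flow

    def F0(t):
        return 40.0 if t < 50.0 else 42.0     # +5% step in the fresh feed of A at t = 50 s

    def rhs(t, x):
        M, nA = x                             # reactor holdup [mol] and moles of A in the reactor
        zA = nA / M
        Fout = Fout_nom + Kc * (M - M_sp)     # level controller manipulates the reactor effluent
        dM = F0(t) - (1.0 - zA) * Fout        # the zA*Fout of unreacted A is recycled instantly
        dnA = F0(t) - k * nA                  # fresh A in, reaction (k*zA*M) out
        return [dM, dnA]

    zA0 = 40.0 / (k * M_sp)                   # nominal steady state (zA0 = 0.8)
    sol = solve_ivp(rhs, (0.0, 400.0), [M_sp, zA0 * M_sp], max_step=1.0)
    M_end, zA_end = sol.y[0, -1], sol.y[1, -1] / sol.y[0, -1]
    print(f"final holdup = {M_end:.1f} mol, final zA = {zA_end:.3f}")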


Table 1: A plantwide control design procedure

Top-down analysis:

1. CONTROLLED VARIABLES: What is the control objective and which variables should be controlled? Goal: obtain the primary controlled variables ($c$).
   Tools and comments: Steady-state model, constraints and operational objectives. Degree of freedom analysis. Determine the major disturbances. Evaluate the (economic) loss with constant setpoints for $c$ and look for a ``self-optimizing'' control structure.

2. PRODUCTION RATE: Where should the throughput be set? This is a very important choice as it determines the structure of the remaining inventory control system.
   Tools and comments: The optimal economic choice may follow from step 1, but since this is an important decision, a more careful analysis of its implications should be made. The optimal choice may move when there are disturbances (to avoid logic reconfiguration one may consider MPC).

Bottom-up design (with given controlled and manipulated variables):
   Tools and comments: Controllability analysis: compute zeros, poles, relative gain array, minimum singular value, etc.

3. REGULATORY CONTROL LAYER. Main purpose of this layer: enable manual operation of the plant.
   3.1 Stabilization: design of stabilizing loops (including slowly drifting modes), including the choice of (extra) measurements and pairing with manipulated variables.
       Tools and comments: Pole vectors. Prefer, for each unstable mode, large elements in the corresponding pole vectors: large noise is tolerated (measurements) and small input magnitudes are required (manipulators).
   3.2 Local disturbance rejection: assignment of local loops, often based on secondary (extra) measurements.
       Tools and comments: Partially controlled plant. Select secondary measurements ($y_2$) so that the effect of disturbances on the primary outputs ($c$) can be handled by the operators.

4. SUPERVISORY CONTROL LAYER. Main purpose of this layer: keep the (primary) controlled variables ($c$) at their optimal setpoints.
   4a. Decentralized control: preferred if the process is noninteracting and the constraints are not changing. Feed-forward control and ratio control may be useful here.
       Tools and comments: Controllability analysis for decentralized control. Pair on relative gain array elements close to the identity matrix at the crossover frequency, provided they are not negative at steady-state. The closed-loop disturbance gain (CLDG) and the performance relative gain array (PRGA) may be used to analyze interactions and tune controllers.
   4b. Multivariable control: multivariable coordination (including feed-forward control) is needed to improve the control performance of interacting processes, and for tracking of moving active constraints (MPC is well suited for the latter).

5. REAL-TIME OPTIMIZATION: compute optimal setpoints for the controlled variables.
   Tools and comments: Steady-state model and operational objectives (see step 1).

6. VALIDATION.
   Tools and comments: Nonlinear dynamic simulation.

Conclusion

In this paper we have given a review on plantwide control with emphasis on the following tasks that make up the control structure design problem:

  1. Selection of controlled variables ($c$ with setpoints $c_s$).
  2. Selection of manipulated variables ($m$).
  3. Selection of measurements ($v$)
  4. Selection of control configuration
  5. Selection of controller type

For the selection of controlled variables we have seen that the consideration of steady-state economics is very useful. It appears that the solution to this task provides the much needed link between steady-state optimization and process control, and that the idea of ``self-optimizing control'', to reduce the effect of disturbances and uncertainty, provides a very useful framework for making the right decision. We thus propose that the design of the control system should start with the optimization (or at least with identifying what the control objectives really are), thus providing candidate sets of controlled variables. The control problem is then defined, and one may proceed to analyze (e.g. using an input-output controllability analysis) whether the control objectives can be met. The actual bottom-up design of the control system is done after the control problem has been defined, including the classification of all variables (into inputs, disturbances, controlled variables, etc.).

Most of the proposed process oriented procedures have elements from this way of thinking, although some procedures focus mostly on control and operation and seem to skip lightly over the phase where the overall control problem is defined.

Several case studies have been proposed and many have worked on these. However, some of the works provide limited general insight, and their value may therefore be questioned. A more systematic approach and a common ground for comparison are suggested for future work.

In summary, the field of plantwide control is still at a relatively early stage of its development. However, the progress over the last few years, both in terms of case studies and theoretical work, shows promise for the future. There is still a need for a clearer definition of the issues, and it is hoped that this paper may be useful in this respect. In the longer term, when automatic generation and analysis of alternative structures may become more routine, the main problem will probably be to generate models in an efficient way and to provide efficient means for their analysis (e.g. using input-output controllability analysis).

Bibliography

Alonso, A.A. and B.E. Ydstie (1996).
Process systems, passivity and the second law of thermodynamics. Computers & Chemical Engineering 20, S1119-S1124.

Arbel, A., I.H. Rinard and R. Shinnar (1996).
Dynamics and Control of Fluidized Catalytic Crackers. 3. Designing the Control System: Choice of Manipulated and Measured Variables for Partial Control. Ind. Eng. Chem. Res. 35, 2215-2233.

Arkun, Y. and G. Stephanopoulos (1980).
Studies in the synthesis of control structures for chemical processes: Part iv. design of steady-state optimizing control structures for chemical process units. AIChE Journal 26(6), 975-991.

Balchen, J.G. and K.I. Mummé (1988).
Process Control, Structures and applications. Van Nostrand Reinhold.

Banerjee, A. and Y. Arkun (1995).
Control Configuration Design Applied to the Tennessee Eastman Plant-Wide Control Problem. Computers. chem. Engng. 19(4), 453-480.

Buckley, P.S. (1964).
Techniques of Process Control. Chap. 13.
John Wiley & Sons.

Cao, Y. (1995).
Control Structure Selection for Chemical Processes Using Input-Output Controllability Analysis. PhD thesis. University of Exeter.

Chang, J.W. and C.C. Yu (1990).
The relative gain for non-square multivariable systems. Chem. Eng. Sci. 45, 1309-1323.

Douglas, J.M. (1988).
Conceptual Design of Chemical Processes. McGraw-Hill.

Downs, J.J. and E.F. Vogel (1993).
A plant-wide industrial process control problem. Computers chem. Engng. 17, 245-255.

Findeisen, W, F.N. Bailey, M. Brdys, K. Malinowski, P. Tatjewski and A. Wozniak (1980).
Control and coordination in Hierarchical Systems. John Wiley & sons.

Fisher, W.R., M.F. Doherty and J.M. Douglas (1988).
The interface between design and control. Parts 1, 2 and 3; 1: Process controllability, 2: Process operability, 3: Selecting a set of controlled variables. Ind. Eng. Chem. Res. 27(4), 597-615.

Foss, C.S. (1973).
Critique of chemical process control theory. AIChE Journal 19(2), 209-214.

Georgakis, C. (1986).
On the use of extensive variables in process dynamics and control. Chemical Engineering Science 47(6), 1471-1484.

Häggblom, K.E. and K.V. Waller (1988).
Transformations and consistency relations of distillation control structures. AIChE J. 34(10), 1634-1648.

Hansen, J.E. (1998).
Plant wide dynamic simulation and control of chemical processes. PhD thesis. Danmarks Tekniske Universitet.

Havre, K. (1998).
Studies on controllability analysis and control structure design. PhD thesis. NTNU Trondheim. Available from http://www.chembio.ntnu.no/users/skoge/.

Havre, K. and S. Skogestad (1996a).
Input/output selection and partial control. Preprints IFAC '96, 13th World Congress of IFAC, San Francisco, CA, June 30-July 5, 1996, Volume M pp. 181-186.

Havre, K. and S. Skogestad (1996b).
Selection of variables for regulatory control using pole directions. 1996 AIChE Annual Meeting, Chicago, Illinois; Paper 45f.

Havre, K. and S. Skogestad (1998).
Selection of variables for regulatory control using pole vectors. DYCOPS-5, 5th IFAC Symposium on Dynamics and Control of Process Systems, Corfu, Greece, June 8-10, 1998 pp. 614-619.

Hougen, J. O. and N. F. Brockmeier (1969).
Developing Process Control Strategies - I: Eleven Basic Principles. Instrumentation Technology pp. 45-49.

Hovd, M. and S. Skogestad (1993).
Procedure for Regulatory Control Structure Selection with Application to the FCC Process. AIChE Journal 39(12), 1938-1953.

Larsson, T and S Skogestad (2000).
Self-optimizing control of a large-scale plant: The Tennessee Eastman process. Ind. Eng. Chem. Res. (submitted).

Lee, J.H. and M. Morari (1991).
Robust measurements selection. Automatica 27(3), 519-527.

Lee, J.H., P. Kesavan and M. Morari (1997).
Control structure selection and robust control system design for a high-purity distillation column. IEEE Transactions on control systems technology pp. 402-416.

Lee, J.H., R.D. Braatz, M. Morari and A. Packard (1995).
Screening tools for robust control structure selection. Automatica 31(2), 229-235.

Ljung, L. (1987).
System Identification - Theory for the User. Prentice-Hall.

Luyben, M.L. and B.D. Tyreus (1998).
An industrial design/control study for the vinyl acetate monomer process. Computers. chem. Engng. 22(7-8), 876-877.

Luyben, M.L. and C.A. Floudas (1994).
Analyzing the interaction of design and control. 2. reactor separator recycle system. Computers & Chemical Engineering 18(10), 971-994.

Luyben, M.L. and W.L. Luyben (1995).
Design and control of a complex process involving two reaction steps, three distillation columns, and two recycle streams. Ind. Eng. Chem. Res. 34(11), 3885-3898.

Luyben, M.L., B.D. Tyreus and W.L. Luyben (1997).
Plantwide control design procedure. AICHE journal 43(12), 3161-3174.

Luyben, W.L. (1975).
Steady-state energy conservation aspects of distillation column control system design. Ind. Eng. Chem. Fundam. 14(4), 321-325.

Luyben, W.L. (1988).
The concept of eigenstructure in process control. Ind. Eng. Chem. Res. 27(4), 206-208.

Luyben, W.L. (1992).
Design and control of recycle processes in ternary systems with consecutive reactions. In: Interactions between process design and process control. IFAC Workshop. Pergamon Press. pp. 65-74.

Luyben, W.L. (1994).
Snowball effect in reactor/separator processes with recycle. Ind. Eng. Chem. Res. 33(2), 299-305.

Luyben, W.L. (1996).
Design and control degrees of freedom. Ind. Eng. Chem. Res. 35(7), 2204-2214.

Luyben, W.L., B.D. Tyreus and M. Luyben (1998).
Plantwide Process Control. McGraw-Hill.

Lyman, P.R. and C. Georgakis (1995).
Plant-wide control of the Tennessee Eastman Problem. Computers chem. Engng. 19(3), 321-331.

Maarleveld, A. and J.E. Rijnsdorp (1970).
Constraint control of distillation columns. Automatica 6, 51-58.

Manousiouthakis, V., R. Savage and Y. Arkun (1986).
Synthesis of decentralized process control structures. AIChE Journal 32(6), 991-1003.

Marlin, T.E. and A.N. Hrymak (1997).
Real-time operations optimization of continuous processes. In: Fifth international conference on chemical process control (CPC-5, Lake Tahoe, Jan. 1996). Vol. 93 of AIChE Symposium Series. pp. 156-164.

Martens, H. (1989).
Multivariate calibration. Wiley.

McAvoy, T.J. (1999).
Synthesis of plantwide control systems using optimization. Ind. Eng. Chem. Res. 38(8), 2984-2994.

McAvoy, T.J. and N. Ye (1994).
Base control for the Tennessee Eastman problem. Computers chem. Engng. 18(5), 383-413.

Mizoguchi, A., T.E. Marlin and A.N. Hrymak (1995).
Operations optimization and control design for a petroleum distillation process. Canadian J. of Chem. Engng. 73, 896-907.

Mizsey, P. and I. Kalmar (1996).
Effects of recycle on control of chemical processes. ESCAPE-6, 26-29 May 1996, Rhodes, Greece; Supplement to Computers & Chemical Engineering pp. S883-S888.

Morari, M. (1982).
Integrated plant control: A solution at hand or a research topic for the next decade?. In: CPC-II. pp. 467-495.

Morari, M., G. Stephanopoulos and Y. Arkun (1980).
Studies in the synthesis of control structures for chemical processes. Part I: Formulation of the problem. Process decomposition and the classification of the control task. Analysis of the optimizing control structures. AIChE Journal 26(2), 220-232.

Narraway, L. and J. Perkins (1994).
Selection of process control structures based in economics. Computers chem. Engng 18(supplement), S511-S515.

Narraway, L.T. and J.D. Perkins (1993).
Selection of process control structure based on linear dynamic economics. Ind. Eng. Chem. Res. 32(11), 2681-2692.

Narraway, L.T., J.D. Perkins and G.W. Barton (1991).
Interaction between process design and process control: economic analysis of process dynamics. J. Proc. Cont. 1, 243-250.

Ng, C. and G. Stephanopoulos (1998a).
Plant-wide control structures and strategies. To be published in Process System Engineering Series of Academic press.

Ng, C. and G. Stephanopoulos (1998b).
Plant-Wide control structures and strategies. In: Preprints Dycops-5. IFAC. pp. 1-16.

Ogunnaike, B.A. (1995).
A contemporary industrial perspective on process control theory and practice. Dycord+ '95, 4th IFAC Symposium on Dynamics and Control of Chemical Reactors, Distillation Columns, and Batch Processes, Preprints, 7-9 June 1995 pp. 1-8.

Papadourakis, A., M.F. Doherty and J.M. Douglas (1987).
Relative gain array for units in plants with recycle. Ind. Eng. Chem. Res. 26(6), 1259-1262.

Ponton, J.W. (1994).
Degrees of freedom analysis in process control. Chemical Engineering Science 49(13), 2089-2095.

Ponton, J.W. and D.M. Laing (1993).
A hierarchical approach to the design of process control systems. Trans IChemE 71(Part A), 181-188.

Price, R.M., P.R. Lyman and C. Georgakis (1993).
Selection of throughput manipulators for plant-wide control structures. ECC '93 pp. 1060-1066.

Price, R.M., P.R. Lyman and C. Georgakis (1994).
Throughput manipulation in plantwide control structures. Ind. Eng. Chem. Res. 33(5), 1197-1207.

Ricker, N.L. (1995).
Optimal steady-state operation of the Tennessee Eastman challenge process. Computers chem. Engng 19(9), 949-959.

Ricker, N.L. (1996).
Decentralized control of the Tennessee Eastman Challenge Process. J. Proc. Cont. 6(4), 205-221.

Ricker, N.L. and J.H. Lee (1995).
Nonlinear model predictive control of the Tennessee Eastman challenge process. Computers chem. Engng. 19(9), 961-981.

Rijnsdorp, J.E. (1991).
Integrated Process Control and automation. Elsevier.

Rinard, I.H. and J.J. Downs (1992).
Plant wide control: A review and critique. AIChE Spring Meeting 1992, New Orleans, paper 67f.

Scali, C. and C. Cortonesi (1995).
Control of the tennessee eastman benchmark: Performance versus integrity tradeoff. Proc. of 3rd European Control Conference, Rome, Italy, 1995 pp. 3913-3918.

Seborg, D.E., T.F. Edgar and D.A. Mellichamp (1995).
Process dynamics and control. Wiley.

Shinnar, R. (1981).
Chemical reactor modelling for purposes of controller design. Chem. Eng. Commun. 9, 73-99.

Shinskey, F.G. (1988).
Process Control Systems. McGraw-Hill.

Skogestad, S (2000).
Plantwide control: The search for the self-optimizing control structure. J. Proc. Control.

Skogestad, S. and E.A. Wolff (1992).
Controllability measures for disturbance rejection. IFAC Workshop, London, Sept. 7-8 1992 pp. 23-30.
Later printed in Modeling, Identification and Control, 1996, pp. 167-182.

Skogestad, S. and I. Postlethwaite (1996).
Multivariable Feedback Control. John Wiley & Sons.

Skogestad, S. and M. Hovd (1995).
Letter to the editor on the decentralized versus multivariable control. J. Proc. Cont. 5(6), 499-400.

Stephanopoulos, G. (1982).
Synthesis of control systems for chemical plants - a challenge for creativity. Computers & Chemical Engineering 7(4), 331-365.

Tyreus, B.D. (1999a).
Dominant Variables for Partial Control. 2. Application to the Tennessee Eastman Challenge Process.. Ind. Eng. Chem. Res. pp. 1444-1455.

Tyreus, B.D. (1999b).
Dominant variables for partial control; Part 1 and 2. Ind. Eng. Chem. Res. 38(4), 1432-1455.

Umeda, T., T. Kuriyama and A. Ichikawa (1978).
A logical structure for process control system synthesis. Proc. IFAC Congress (Helsinki) 1978.

van de Wal, M. and B. de Jager (1995).
Control structure design: A survey. In: Proceedings of the American control conference. pp. 225-229.

Wolff, E. and S. Skogestad (1994).
Operability of integrated plants. PSE '94, Korea 30 May-3 June 1994 pp. 63-69.

Wolff, E.A., S. Skogestad and M. Hovd (1992).
Controllability of integrated plants. AIChE Spring National Meeting Paper 67a.

Wu, K.L. and C.-C. Yu (1996).
Reactor/separator process with recycle-1. candidate control structure for operability. Computers. chem. Engng. 20(11), 1291-1316.

Wu, K.L. and C.-C. Yu (1997).
Operability for processes with recycles: Interaction between design and operation with application to the Tennessee Eastman challenge process. Ind. Eng. Chem. Res. 36(6), 2239-2251.

Yi, C.K. and W.L. Luyben (1995).
Evaluation of plant-wide control structures by steady-state disturbance sensitivity analysis. Ind. Eng. Chem. Res. 34, 2393-2405.

Zheng, A. and R.V. Mahajannam (1999).
A qualitative controllability index and its applications. Ind. Eng. Chem. Res. pp. 999-1006.

Zheng, A., R.V. Mahajannam and J.M. Douglas (1999).
Hierarchical procedure for plantwide control system synthesis. AIChE Journal 45, 1255-1265.



Footnotes

... Larsson1
Presently at ABB Corporate Research, Norway
... Skogestad2
Author to whom correspondence should be addressed. E-mail: skoge@chembio.ntnu.no; phone: +47-7359-4154; fax: +47-7359-4080
...)3
The assumption that $G_{22}^{-1}$ exists for all values of $s$ can be relaxed by replacing the inverse with the pseudo-inverse.

