Approximate Dynamic Programming Approach to Process Control

Jay Lee and Wee Chin Wong
Georgia Tech


Abstract

In this talk, we assess the potential of the approximate dynamic programming (ADP) approach for process control, especially as a method to complement the model predictive control (MPC) approach. In the Artificial Intelligence (AI) and Operations Research (OR) research communities, ADP has recently seen significant activity as an effective method for solving Markov decision processes (MDPs), which represent a class of multi-stage decision problems under uncertainty. Process control problems are similar to MDPs, the key difference being continuous rather than discrete state and action spaces. In addition, unlike in other popular ADP application areas such as robotics or games, the first and foremost concern in process control applications is the safety and economics of the ongoing operation rather than efficient learning. We explore different options within ADP design, such as pre-decision vs. post-decision state value functions, parametric vs. nonparametric value function approximators, batch-mode vs. continuous-mode learning, and exploration vs. robustness. We argue that ADP holds great potential, especially for obtaining effective control policies for stochastic constrained linear or nonlinear systems and for continually improving them toward optimality.
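
To make one of these design options concrete, below is a minimal sketch of batch-mode value iteration with a parametric value function approximator, handling a continuous state space by sampling states rather than gridding them. The dynamics, cost, basis functions, and all numerical settings are illustrative assumptions chosen for this sketch, not taken from the talk.

    import numpy as np

    # A minimal sketch of batch-mode fitted value iteration, one of the ADP
    # design options discussed above. Assumed for illustration only:
    # scalar stochastic linear dynamics x' = a*x + b*u + w, a quadratic
    # stage cost, and a parametric approximation V(x) ~ w0 + w1 * x**2.

    rng = np.random.default_rng(0)
    a, b, gamma = 1.2, 1.0, 0.95             # assumed dynamics and discount
    q, r, sigma = 1.0, 0.1, 0.1              # assumed cost weights and noise

    states = rng.uniform(-2.0, 2.0, 200)     # sampled continuous states
    actions = np.linspace(-3.0, 3.0, 61)     # discretized candidate actions
    noise = sigma * rng.standard_normal(30)  # samples approximating E over w

    w = np.zeros(2)                          # weights on basis [1, x**2]
    for _ in range(100):                     # value-iteration sweeps
        targets = np.empty_like(states)
        for i, x in enumerate(states):
            # Bellman backup: min over u of stage cost + discounted E[V(x')]
            x_next = a * x + b * actions[:, None] + noise[None, :]
            v_next = w[0] + w[1] * x_next**2
            cost = q * x**2 + r * actions**2 + gamma * v_next.mean(axis=1)
            targets[i] = cost.min()
        # Batch least-squares regression of backed-up targets onto the basis
        phi = np.column_stack([np.ones_like(states), states**2])
        w, *_ = np.linalg.lstsq(phi, targets, rcond=None)

    print(f"fitted value function: V(x) ~ {w[0]:.3f} + {w[1]:.3f} * x^2")

Replacing the quadratic basis at the regression step with a nonparametric regressor (e.g., kernel regression) would yield the nonparametric variant mentioned above, while performing the fit after each new transition instead of over a batch of sampled states corresponds to continuous-mode learning.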