Approximate Dynamic Programming Strategy for Dual Adaptive Control

Authors: Lee Jong Min, Georgia Institute of Technology, United States
Lee Jay H., Georgia Institute of Technology, United States
Topic: 2.4 Optimal Control
Session: Specific Problems in Optimal Control
Keywords: approximate dynamic programming, dual control, adaptive control, stochastic dynamic programming

Abstract

An approximate dynamic programming (ADP) strategy for a dual control problem is presented. The optimal control policy for a dual control problem can be obtained only by solving a stochastic dynamic program, which is analytically and computationally intractable with conventional solution methods that require sampling the complete hyperstate space. To solve the dynamic program in a computationally amenable manner, we perform closed-loop simulations with different control policies to generate a data set that defines a relevant subset of the hyperstate space, and then solve the Bellman equation only on the collected data points using value iteration. A local approximator with a penalty function is designed to give reliable estimates of the cost-to-go values over the continuous hyperstate space. An integrating process with an unknown gain is used for illustration.
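The sketch below is a minimal, illustrative reading of the abstract's procedure, not the authors' implementation. It assumes an integrating process x_{k+1} = x_k + b u_k + w_k with unknown gain b, a hyperstate (x, b_hat, P) consisting of the state and the recursive least-squares estimate of the gain with its variance, closed-loop simulations under a few certainty-equivalence policies with different dithering levels, and value iteration restricted to the collected hyperstates with a simple k-nearest-neighbour cost-to-go approximator standing in for the paper's local approximator with a penalty function. All parameter values and helper names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
sig_w, gamma, n_steps, n_runs = 0.1, 0.95, 20, 30   # assumed settings, not from the paper

def step_estimate(b_hat, P, x, u, x_next):
    """Recursive least-squares (Kalman-type) update of the unknown-gain estimate."""
    innov = x_next - x - b_hat * u
    S = P * u**2 + sig_w**2
    K = P * u / S
    return b_hat + K * innov, P - K * u * P

def simulate(policy, b_true):
    """One closed-loop run; returns the visited hyperstates (x, b_hat, P)."""
    x, b_hat, P, states = 1.0, 0.0, 1.0, []
    for _ in range(n_steps):
        u = policy(x, b_hat, P)
        x_next = x + b_true * u + sig_w * rng.standard_normal()
        b_hat, P = step_estimate(b_hat, P, x, u, x_next)
        states.append((x, b_hat, P))
        x = x_next
    return states

# Collect hyperstate data with certainty-equivalence policies plus different dither levels.
policies = [lambda x, b, P, a=a: -x / (b if abs(b) > 0.2 else 0.2) + a * rng.standard_normal()
            for a in (0.0, 0.5, 1.0)]
data = [s for pol in policies for _ in range(n_runs // len(policies))
        for s in simulate(pol, b_true=rng.uniform(0.5, 2.0))]
H = np.array(data)            # sampled hyperstates defining the relevant subset
J = np.zeros(len(H))          # cost-to-go estimates on the sampled points

def knn_cost(h, k=5):
    """Local approximator: average cost-to-go of the k nearest sampled hyperstates."""
    d = np.linalg.norm(H - np.asarray(h), axis=1)
    return J[np.argsort(d)[:k]].mean()

# Value iteration carried out only on the collected data points.
u_grid = np.linspace(-2.0, 2.0, 11)
for _ in range(15):
    J_new = np.empty_like(J)
    for i, (x, b_hat, P) in enumerate(H):
        best = np.inf
        for u in u_grid:
            # Certainty-equivalent next state; the information gain shows up as a smaller P.
            x_next = x + b_hat * u
            b_next, P_next = step_estimate(b_hat, P, x, u, x_next)
            cost = x**2 + 0.1 * u**2 + gamma * knn_cost((x_next, b_next, P_next))
            best = min(best, cost)
        J_new[i] = best
    J = J_new
print("mean cost-to-go over sampled hyperstates:", J.mean())

In this toy version the dual effect enters through the posterior variance P: exciting inputs shrink P in the successor hyperstate, lowering the approximated cost-to-go there, so the value iteration can trade off probing against regulation on the sampled points.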