Tight Convex Underestimators for Arbitrary C2-Continuous Functions

Chrysanthos E. Gounaris and Christodoulos A. Floudas. Chemical Engineering, Princeton University, Princeton, NJ 08544

Convex underestimation of nonconvex functions is of fundamental importance for deterministic global optimization algorithms that utilize a branch-and-bound scheme.  The tightness of the underestimators to the original function is directly linked to the computational efficiency of the optimization method.  For general nonconvex functions that do not exhibit any special structure that could be exploited, one can use the convex underestimators of the αBB theory [1,2,3].  Constructing these underestimators requires bounds on the elements of the function's Hessian matrix, which can be obtained by interval arithmetic calculations.  The method benefits from shrinkage of the domain under consideration, and this was first exploited in the piecewise p-αBB method [4].  That method partitions the domain into many subdomains and constructs the corresponding αBB underestimator for each one.  These underestimators, although not valid over the entire domain, are much tighter to the original function within their respective subdomains.  A hyperplane is subsequently added to each of these underestimators, selected so that the overall underestimator produced is continuous and smooth (C1-continuous).
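For reference, the classical αBB construction from [1,2,3] (the uniform-α form is shown here as a minimal illustration; it is standard theory, not a result of this abstract) underestimates a C2-continuous f over a box [x^L, x^U] by subtracting a separable quadratic whose magnitude is driven by the Hessian bounds:

```latex
% \alpha BB underestimator over the box [x^L, x^U]:
L(\mathbf{x}) \;=\; f(\mathbf{x}) \;-\; \sum_{i=1}^{n} \alpha_i \,\bigl(x_i^U - x_i\bigr)\bigl(x_i - x_i^L\bigr)
% Sufficient condition for convexity of L in the uniform case \alpha_i = \alpha:
\alpha \;\ge\; \max\Bigl\{0,\; -\tfrac{1}{2}\,\min_{\mathbf{x}\in[\mathbf{x}^L,\mathbf{x}^U]} \lambda_{\min}\!\bigl(\nabla^2 f(\mathbf{x})\bigr)\Bigr\}
```

The subtracted term is nonnegative on the box and vanishes at its corners, so L is a valid underestimator for any valid choice of the α values; the lower bound on λ_min is precisely where the interval Hessian information enters.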

In our proposed approach we construct these αBB underestimators, but instead of adding hyperplanes we identify the tangential line segments that, in combination with convex parts of the original underestimators, constitute a C1-continuous convex underestimator valid over the entire domain under consideration.  One can also consider only the lines defined by these linear segments, thus obtaining a piecewise-linear underestimator, which can easily be incorporated into the NLP relaxation as a set of linear constraints (a computational sketch follows).
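As an illustration of the key univariate computation, the following sketch finds the common tangent line between two adjacent convex underestimator pieces by bisection on the slope. All names (phi1, dphi1, common_tangent, ...) are our own illustration under the assumption that each piece and its derivative are available as callables; this is not the paper's implementation.

```python
# Sketch: common tangent line between two adjacent convex 1-D
# underestimator pieces, phi1 on [a, b] and phi2 on [b, c].

def tangent_point(dphi, lo, hi, s, tol=1e-10):
    """Point in [lo, hi] where the convex piece has slope s (clamped)."""
    if dphi(lo) >= s:
        return lo
    if dphi(hi) <= s:
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dphi(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def common_tangent(phi1, dphi1, a, b, phi2, dphi2, c, tol=1e-10):
    """Slope and touching points of the line tangent to both pieces.

    Bisection on the slope s: the intercept gap
    g(s) = [phi2(x2) - s*x2] - [phi1(x1) - s*x1]
    is decreasing in s, since g'(s) = x1(s) - x2(s) <= 0 (x1 <= b <= x2).
    """
    def gap(s):
        x1 = tangent_point(dphi1, a, b, s)
        x2 = tangent_point(dphi2, b, c, s)
        return (phi2(x2) - s * x2) - (phi1(x1) - s * x1), x1, x2

    s_lo = min(dphi1(a), dphi2(b))   # bracket of attainable slopes
    s_hi = max(dphi1(b), dphi2(c))
    while s_hi - s_lo > tol:
        s = 0.5 * (s_lo + s_hi)
        g, _, _ = gap(s)
        if g > 0:
            s_lo = s
        else:
            s_hi = s
    s = 0.5 * (s_lo + s_hi)
    _, x1, x2 = gap(s)
    return s, x1, x2

# Two convex pieces meeting at b = 0; the common tangent is y = 0,
# touching at x1 = -1 and x2 = +1.
s, x1, x2 = common_tangent(lambda x: (x + 1)**2, lambda x: 2 * (x + 1), -2.0, 0.0,
                           lambda x: (x - 1)**2, lambda x: 2 * (x - 1), 2.0)
print(s, x1, x2)  # approx. 0.0, -1.0, 1.0
```

Collecting these tangent lines over all pairs of adjacent subdomains yields exactly the linear segments referred to above, each of which is a valid linear constraint for the relaxation.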

This methodology, which is directly applicable to univariate problems, can be extended to multidimensional problems through suitable projections of the multivariate αBB underestimators onto one-dimensional spaces. Since such projections discard information, we also explore ways to recover some of what is lost in this process. In particular, we apply our method after subjecting the problem to an orthonormal transformation. This leads to the construction of even tighter underestimators, through the accumulation of additional valid linear cuts in the relaxation. Improvements are observed both in the quality of the lower bound obtained and in the tightness of the underestimator across the whole domain under consideration [5].
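A one-line argument for why cuts derived in rotated coordinates remain valid in the original space (our reading of the construction, not a derivation quoted from [5]): with y = Q^T x for orthonormal Q, any linear cut that underestimates the rotated function maps back to a valid cut in x,

```latex
\mathbf{a}^{\mathsf T}\mathbf{y} + \beta \;\le\; f(Q\mathbf{y})
\;\;\Longrightarrow\;\;
(Q\mathbf{a})^{\mathsf T}\mathbf{x} + \beta \;\le\; f(\mathbf{x}),
\qquad \mathbf{y} = Q^{\mathsf T}\mathbf{x},\quad Q^{\mathsf T}Q = I,
```

so each orthonormal transformation contributes additional valid linear cuts to the same relaxation in the original variables.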

The method has been tested on large collections of univariate and multivariate test problems. In most cases, the lower bounds obtained are close to the global minimum, thus eliminating the need for any branching at all.


[1]     C. S. Adjiman, S. Dallwig, C. A. Floudas, and A. Neumaier, Computers & Chemical Engineering 22, 9 (1998).

[2]     C. S. Adjiman, I. P. Androulakis, and C. A. Floudas, Computers & Chemical Engineering 22, 9 (1998).

[3]     C. A. Floudas, Deterministic Global Optimization, Kluwer Academic Publishers (2000).

[4]     C. A. Meyer and C. A. Floudas, Journal of Global Optimization 32, 2 (2005).

[5]     C. E. Gounaris and C. A. Floudas, paper in preparation (2006).