Report from short-course by Boyd on convex optimization

Sigurd Skogestad ((no email))
Thu, 28 Sep 95 13:46:21 +0100

Hello PROSTers,

This is a report from a three-day course in Linkoping by
Stephen Boyd, mostly on convex optimization applied to control
problems.

A copy of Stephen Boyd's course notes can be found in the process control
(prosessregulering) group room. Morten Hovd thinks the course notes are very
good (although hardly a replacement for a textbook).

>
> Minutes from LMI-course in Linkoping 23 - 25 of august, 1995.
> -------------------------------------------------------------
>
> Some of you may be aware that Morten Hovd and I (Kjetil Havre, KH) went to
> Linkoping to take a course in Linear Matrix Inequalities (LMI) given by
> Stephen Boyd. Den tekniska hogskulen i Linkoping usually arranges one
> non-profit course every August. I think that everyone who is interested is
> welcome.
>
> Stephen Boyd is well known in control engineering academia for his work on
> optimization and on LMIs applied to control theory, and for the fact that he
> received the Eckman award for promising young research scientists (for
> scientists below 35 years of age).
>
> The lecture notes were taken from a course on Convex Optimization in the
> electrical engineering department at Stanford University. So the main focus
> was on convex optimization rather than on LMIs with applications to control
> engineering.
>
> An outline of his talk on the first day was:
> 1) Convex optimization,
> definitions of convex sets, convex functions and convex optimization problems.
> 2) Examples,
> he gave examples of how the identification problem
> for a Finite Impulse Response (FIR) model can be formulated as a convex
> optimization problem, and how the set of identified impulse response models
> can be used to calculate a robust and optimal control law in a
> feed-forward scheme.
> 3) Ellipsoid algorithm
> Boyd explained the details of the ellipsoid algorithm without
> constraints,
> and then extended the algorithm to include constraints.
> He also explained how to choose a stopping criterion based on a lower bound
> on the objective function. In convex optimization the stopping
> criterion is thus much simpler than in conventional optimization theory,
> where stopping criteria are based on the Kuhn-Tucker conditions.
> This simplification is a great advantage over the more traditional and
> general optimization theory.
> 4) At the end of the day he outlined the basic idea of the more efficient
> interior point methods.
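To make point 3 concrete, here is a minimal sketch of the standard unconstrained ellipsoid method with the lower-bound stopping criterion described above. This is not code from the course; the test problem, radius and tolerance are made-up illustrations:

```python
import numpy as np

def ellipsoid_method(f, grad, x0, R, tol=1e-6, max_iter=5000):
    """Ellipsoid method for unconstrained convex minimization.

    Stops when sqrt(g' P g) <= tol, since f(x) - sqrt(g' P g)
    is a lower bound on the optimum f* (the minimizer stays
    inside the current ellipsoid {z : (z-x)' P^-1 (z-x) <= 1}).
    """
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    P = R**2 * np.eye(n)                # initial ellipsoid: ball of radius R
    best_x, best_f = x.copy(), f(x)
    for _ in range(max_iter):
        g = grad(x)
        gap = np.sqrt(g @ P @ g)        # certified bound on f(x) - f*
        if gap <= tol:
            break
        gt = g / gap                    # normalised (sub)gradient
        Pg = P @ gt
        x = x - Pg / (n + 1)            # move to the centre of the new ellipsoid
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Illustration: minimise (x1 - 1)^2 + (x2 + 2)^2, starting ball of radius 10.
x_opt, f_opt = ellipsoid_method(
    lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
    lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)]),
    [0.0, 0.0], R=10.0)
```

Note how the stopping test needs no Kuhn-Tucker machinery: one inner product gives a certified gap to the optimum.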
>
> In the evening of the first day all of the participants (about 45) were
> invited to prof. Lennart Ljung's home for beer, wine, cheese, snacks, coffee
> and cookies. This gave us the opportunity to get better acquainted with the
> Ph.D. students at Den tekniska hogskulen in Linkoping and at KTH in Stockholm.
>
> The outline for the second day was:
> 1) Review and finish some points about the ellipsoid algorithm.
> 2) Examples, with applications to
> Linear Programming (LP),
> Sequential Quadratic Programming (SQP),
> Quadratic Programming (QP),
> Semidefinite Programming (SDP), where the constraints are LMIs.
> 3) Duality.
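As a small concrete instance of the QP case above: an equality-constrained QP can be solved exactly with one linear solve via its KKT conditions. The numbers below are an illustrative example, not taken from the lectures:

```python
import numpy as np

# Equality-constrained QP:  minimise 0.5 x'Qx + c'x  subject to  Ax = b.
# The KKT conditions form one linear system in (x, lambda):
#   [Q  A'] [x     ]   [-c]
#   [A  0 ] [lambda] = [ b]
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
KKT = np.block([[Q, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]        # primal solution and Lagrange multiplier
```

For this example the unconstrained minimum (1, 2) is projected onto the constraint x1 + x2 = 1, giving x = (0, 1) with multiplier lambda = 2.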
>
> The last day was a bit shorter than the others. His talk was mainly on
> 1) Duality.
> 2) Interior point methods.
>
> Duality, in the sense in which Boyd talked about it, means:
> - finding a lower bound on the objective over the feasible set that depends
> on some parameters, lambda;
> - finding conditions on lambda under which the bound is nontrivial
> (> -infinity);
> - maximizing the lower bound over the parameters lambda.
>
> One fairly easy way to construct a dual problem is via the Lagrangian.
> Boyd gave a nice geometrical interpretation of why optimizing the Lagrangian
> yields the minimum of the objective.
> The dual problem can be seen to be a concave maximization problem. It is
> usually easy to show that the dual problem yields a lower bound on the
> objective function (weak duality). However, it is harder to show that the
> objective attains its minimum at the same values where the dual problem
> attains its maximum, and that the optimal values of the objective and the
> dual problem are equal (strong duality).
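A one-dimensional example makes the three steps concrete (the problem below is a standard textbook-style illustration, not one from the course):

```python
# Toy primal problem:  minimise x^2  subject to  x >= 1   (i.e. 1 - x <= 0).
# Lagrangian:      L(x, lam) = x^2 + lam * (1 - x),   lam >= 0.
# Dual function:   g(lam) = min over x of L(x, lam) = lam - lam^2 / 4,
#                  attained at x = lam / 2.
def g(lam):
    return lam - lam ** 2 / 4.0

f_star = 1.0                            # primal optimum: x* = 1, f(x*) = 1

# Steps 1-2: g(lam) is a nontrivial (> -infinity) lower bound for lam >= 0.
lams = [i * 0.01 for i in range(401)]   # grid of lam values in [0, 4]
assert all(g(l) <= f_star + 1e-9 for l in lams)

# Step 3: maximise the lower bound over lam. Here max g = 1 at lam = 2,
# so the dual optimum equals the primal optimum (strong duality).
best = max(lams, key=g)
```

The dual g is concave (a downward parabola in lambda), matching the remark that the dual is a concave maximization problem.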
>
> Interior point methods use barrier functions which replace the constraints,
> so that the modified problem becomes a smooth unconstrained problem.
> The early methods were developed in the 1950s-1960s. They worked well in
> practice, but there was no worst-case complexity theory, and the methods
> fell out of favour in the 1970s. New methods, not much different from the
> earlier ones, have been developed since 1984. The main difference is that
> one can show worst-case complexity.
>
> At the heart of some of the methods lies Newton's method for smooth convex
> optimization. Together with logarithmic barrier functions, this is reported
> to work well in practice. The method of the central path uses Newton's
> method together with continuation and logarithmic barrier functions to
> calculate the path of centres. A new objective function is formed by
> multiplying the main objective by a parameter alpha and adding the
> logarithmic barrier functions. By setting alpha equal to zero the centre of
> the constraints can be calculated, which is fairly easy. Then, by increasing
> the parameter alpha, one obtains the path from the centre of the constraint
> set to the optimal solution (alpha equal to infinity).
> One might claim that this problem becomes ill-conditioned for large values
> of alpha. However, with logarithmic barrier functions this is not a problem.
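A one-dimensional sketch of the path of centres; the interval constraint and the bisection solver below are illustrative assumptions, not from the course notes:

```python
# Toy problem:  minimise x  subject to  1 <= x <= 3.
# Logarithmic barrier:  phi(x) = -log(x - 1) - log(3 - x).
# Centering problem:    minimise  alpha * x + phi(x)  over (1, 3).
# alpha = 0 gives the analytic centre of the constraints (x = 2);
# increasing alpha traces the central path towards the optimum x* = 1.

def central_point(alpha, lo=1.0, hi=3.0, iters=200):
    """Minimise alpha*x + phi(x) by bisection on its derivative:
    alpha - 1/(x - lo) + 1/(hi - x) = 0."""
    a, b = lo + 1e-12, hi - 1e-12     # derivative goes -inf at lo+, +inf at hi-
    for _ in range(iters):
        m = 0.5 * (a + b)
        d = alpha - 1.0 / (m - lo) + 1.0 / (hi - m)
        if d > 0.0:                   # derivative positive: minimiser is left of m
            b = m
        else:
            a = m
    return 0.5 * (a + b)
```

For the log barrier the point x(alpha) carries the standard guarantee that the gap to the optimum is at most (number of constraints)/alpha, which is why large alpha gives high accuracy despite the apparent ill-conditioning.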
>
> Another class of methods, called "methods of centres", adds additional
> constraints and manipulates these constraints so that the feasible region
> becomes smaller and smaller with each iteration. It can be observed that the
> objective function becomes very steep when the feasible region becomes
> sufficiently small. So, one nice improvement is to combine these two methods.
>
> My (KH) general impression of the course is that it is important to
> recognise convex optimization problems, and to try to formulate an
> optimization problem so that it becomes convex whenever possible. Boyd also
> argued (all the time) that convex optimization problems occur frequently
> in engineering problems and particularly in control applications.
> Another point is that the stopping criterion becomes easy.
>
> It is also worth noting that Professor Lennart Ljung at Tekniska hogskulen
> in Linkoping remarked that this was the best of the traditional August
> courses they had ever arranged.
>
>
>
> Morten Hovd Kjetil Havre
>