Sequential algorithms in nonlinear programming
Abstract
In this thesis, a class of methods for solving the following
non-linear programming problem is investigated: minimize f(x) subject to x ∈ S = {x : hᵢ(x) = 0, i = 1,...,ℓ; gⱼ(x) ≥ 0, j = 1,...,m}. The class of methods under investigation consists of
the ones which reduce the computational process to a sequence of unconstrained
minimization subproblems. These methods are generally noted for
their simplicity and robustness. The aim is to analyse and extend existing
results on the structure and convergence of these methods, and, in certain
cases, to introduce modifications for improving the numerical performance
of existing algorithms. Because of the possible effects degeneracy and near-degeneracy
may have on the numerical performance of algorithms, this thesis is initially
concerned with the structural differences between degenerate and non-degenerate
problems. This leads naturally to a new interpretation of
degeneracy, some structural differences between degenerate and non-degenerate
problems, necessary and sufficient conditions for 'isolated' solutions, the
concept and properties of an 'extended' problem, and a sensitivity analysis
for degenerate problems. Barrier function methods, specifically first order (logarithmic
function) methods, are then investigated. It is shown that, if certain
regularity conditions are satisfied, the successive estimates x(r) of
the solution x(0) lie on a smooth trajectory. If the non-degeneracy
assumption is not satisfied, then the convergence to x(0) is of order r½,
compared with order r in the non-degenerate case. A measure of sensitivity is
introduced which becomes large when the non-degeneracy assumption is close to violation. It is shown that this sensitivity measure is related to
the growth of dⁱx(r)/drⁱ with respect to
i for fixed r. Using this information
the extrapolation procedure can be analysed and the number of useful
extrapolation steps can be predicted. A source for a number of shortcomings
in the classical algorithm is identified, and a modification is
proposed. Properties and convergence of the modified algorithm are analysed.
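The classical first order logarithmic barrier scheme discussed above can be sketched as follows. This is a minimal Python sketch on a hypothetical one-variable problem; the objective f(x) = x², the constraint g(x) = x − 1 ≥ 0, and the schedule of barrier parameters r are illustrative choices, not taken from the thesis:

```python
import math

def log_barrier_min(r, x, tol=1e-12, max_iter=100):
    """Minimize the barrier function B(x, r) = x**2 - r*log(x - 1)
    over x > 1 by damped Newton iteration.

    Illustrative problem: minimize f(x) = x**2 subject to g(x) = x - 1 >= 0,
    whose solution is x* = 1 (the unconstrained minimizer x = 0 is infeasible).
    """
    for _ in range(max_iter):
        grad = 2.0 * x - r / (x - 1.0)       # dB/dx
        hess = 2.0 + r / (x - 1.0) ** 2      # d2B/dx2, positive on x > 1
        step = grad / hess
        # Damp the step so the iterate stays strictly feasible (x > 1).
        while x - step <= 1.0:
            step *= 0.5
        x -= step
        if abs(step) < tol:
            break
    return x

# Follow the barrier trajectory x(r) as r decreases, warm-starting each solve.
x = 2.0
for r in [1.0, 1e-1, 1e-2, 1e-3, 1e-4]:
    x = log_barrier_min(r, x)
    print(f"r = {r:g}, x(r) = {x:.8f}, error = {x - 1.0:.2e}")
```

For this problem the constraint is active with a non-zero multiplier, so the non-degeneracy assumption holds and the error x(r) − x(0) behaves like r/2, i.e. it shrinks in proportion to r, matching the order r convergence stated above for the non-degenerate case.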
The numerical performance of the classical and the modified algorithms is
compared for a range of test problems. Penalty function and multiplier methods, specifically the
quadratic penalty function method and the corresponding augmented
Lagrangian methods, are treated next. Order of convergence results for
the quadratic loss penalty function and the augmented Lagrangian methods
are extended to include problems with degenerate inequality constraints.
For an equality constrained problem, it is known that an augmented Lagrangian
method is just an unconstrained maximization of the augmented dual function,
Dσ(v) = minₓ {f(x) − vᵀh(x) + (σ/2)h(x)ᵀh(x)}, with respect to v for σ large enough.
It is now shown that the Newton iteration for v based on maximizing Dσ(v)
can be decomposed into a Powell/Hestenes iteration followed by a Newton-like
correction to v. Utilizing the decomposition result, two acceleration
techniques are proposed for improving the convergence of the Powell/Hestenes
multiplier method. The first of these two techniques is a simple device
superimposed on the original Powell/Hestenes iteration to make use of
information from the previous iteration. For problems with only one constraint,
this acceleration technique is equivalent to replacing the second
(Newton-like) part of the decomposition by a finite difference approximation.
The other acceleration technique is just a quasi-Newton technique.
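The basic Powell/Hestenes iteration that these acceleration techniques build on can be sketched as follows. This is a minimal Python sketch on a hypothetical one-constraint equality problem; the objective, constraint, and penalty parameter σ are illustrative choices, not taken from the thesis:

```python
def powell_hestenes(sigma=10.0, v=0.0, iters=20):
    """Powell/Hestenes multiplier iteration on an illustrative problem:

        minimize f(x) = x**2  subject to  h(x) = x - 1 = 0,

    with solution x* = 1 and multiplier v* = 2.  The augmented Lagrangian
    L(x, v) = f(x) - v*h(x) + (sigma/2)*h(x)**2 is quadratic in x here,
    so its minimizer is available in closed form.
    """
    for _ in range(iters):
        # Inner step: x_k = argmin_x L(x, v_k); stationarity gives
        # 2x - v + sigma*(x - 1) = 0.
        x = (v + sigma) / (2.0 + sigma)
        # Multiplier update: v_{k+1} = v_k - sigma * h(x_k).
        v = v - sigma * (x - 1.0)
    return x, v

x, v = powell_hestenes()
print(x, v)
```

On this example the multiplier error contracts by the factor 2/(2 + σ) per iteration, so the plain iteration converges only linearly; this slow linear rate is exactly what the acceleration techniques described above are meant to improve.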
Numerical results using various updating schemes for v are reported and
discussed. Finally, possible uses for the methods developed in this thesis are discussed, including constrained optimization without derivatives and
optimal control problems with terminal constraints.