Methods for Solving Generalized Systems of Inequalities With Application to Nonlinear Programming
Burke, James Vincent
Permalink
https://hdl.handle.net/2142/71215
Description
Title
Methods for Solving Generalized Systems of Inequalities With Application to Nonlinear Programming
Author(s)
Burke, James Vincent
Issue Date
1983
Department of Study
Mathematics
Discipline
Mathematics
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Mathematics
Abstract
In this thesis techniques for solving generalized systems of inequalities are developed. That is, given two real normed linear spaces X and Y and a mapping g: X → Y, iterative schemes are developed for the solution of the following problem: find x ∈ X such that 0 ∈ g(x) + K, where K is a closed convex cone in Y. Such an x is called a solution to the generalized system of inequalities g(x) ≤_K 0, where the relation "≤_K" represents the usual partial order induced on Y by K. A wide variety of problems in optimization theory can be cast in this framework, e.g., solving systems of equations and inequalities, solving general nonlinear complementarity problems, and finding Karush-Kuhn-Tucker points for mathematical programs.
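To fix ideas (a standard specialization, not specific to the thesis): with X = Rⁿ, Y = Rᵐ, and K = R^m_+ the nonnegative orthant, the condition 0 ∈ g(x) + K is equivalent to −g(x) ∈ K, i.e., to the ordinary finite system of inequalities g_i(x) ≤ 0 for i = 1, …, m; choosing instead K = {0}^q × R^p_+ recovers a mixed system of q equations and p inequalities.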
Utilizing F. Clarke's recently developed theory of generalized subgradients, search directions are defined for the penalty function μ(x) := dist(g(x) | −K) that are natural extensions of the steepest descent and Newton directions. These directions are then used in conjunction with Armijo-type step length strategies to produce algorithms which are shown to be globally convergent and, under standard assumptions, locally quadratically convergent. The abstract theory is then applied to several specific settings making use of both smooth and polyhedral norms, e.g., the 1-, 2-, and ∞-norms. The numerical intricacies of each of these applications are discussed, and concrete algorithms for machine implementation are indicated.
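To make the globalization pattern concrete, the following is a minimal numerical sketch, not the thesis algorithm itself: it assumes the special case X = Rⁿ, Y = Rᵐ, K = R^m_+, and the 2-norm, so that μ(x) = ‖max(g(x), 0)‖₂, and combines a gradient-based direction with Armijo backtracking. The names (g, jac, descent_step) and all tolerances are illustrative choices, not taken from the thesis.

    # Minimal sketch of penalty-function descent for g(x) <= 0
    # with mu(x) = || max(g(x), 0) ||_2 (K = R^m_+, 2-norm case).
    import numpy as np

    def mu(gval):
        """Distance of g(x) to -K for K = R^m_+ under the 2-norm."""
        return np.linalg.norm(np.maximum(gval, 0.0))

    def descent_step(x, g, jac, sigma=1e-4, beta=0.5, max_backtracks=30):
        gval, J = g(x), jac(x)
        r = np.maximum(gval, 0.0)        # violated part of the system
        m = mu(gval)
        if m == 0.0:                     # x already solves g(x) <= 0
            return x, True
        grad = J.T @ r / m               # gradient of mu at smooth points
        d = -grad                        # steepest-descent-type direction
        t = 1.0
        for _ in range(max_backtracks):  # Armijo-type step length rule
            if mu(g(x + t * d)) <= m - sigma * t * np.dot(grad, grad):
                return x + t * d, False
            t *= beta
        return x, False                  # step length search failed

    # Toy system of inequalities g(x) <= 0:
    #   g1(x) = x1^2 + x2^2 - 4   (inside a disc of radius 2)
    #   g2(x) = 1 - x1            (x1 >= 1)
    g = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, 1.0 - x[0]])
    jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [-1.0, 0.0]])

    x = np.array([3.0, 3.0])
    for _ in range(200):
        x, solved = descent_step(x, g, jac)
        if solved:
            break
    print(x, g(x))   # g(x) should be (approximately) <= 0 componentwise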
Finally, these results are applied toward the development of a robust sequential quadratic programming algorithm for general nonlinear programs. Here robustness means that the difficulties arising from the possible introduction of infeasible quadratic subproblems are circumvented. This approach, under the usual assumptions, retains local quadratic convergence properties. Moreover, the method is shown to be amenable to the watchdog extension, utilizing various watchdog functions.
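For illustration only (a standard device for coping with infeasible quadratic subproblems, not necessarily the construction developed in the thesis), the linearized constraints of the quadratic subproblem can be relaxed with a nonnegative slack s and a penalty weight ρ > 0:

    minimize over (d, s):  ∇f(x)ᵀd + ½ dᵀH d + ρ s
    subject to:            g(x) + g′(x)d ≤ s·e,   s ≥ 0,

where e is the vector of ones. The relaxed subproblem is always feasible (take d = 0 and s sufficiently large), so the iteration cannot break down for lack of a search direction.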