Theory and Algorithms for Nonlinear Optimization and Variational Inequalities
Chu, Liang-Ju
Permalink
https://hdl.handle.net/2142/72533
Description
Title
Theory and Algorithms for Nonlinear Optimization and Variational Inequalities
Author(s)
Chu, Liang-Ju
Issue Date
1992
Doctoral Committee Chair(s)
McLinden, L.
Department of Study
Mathematics
Discipline
Mathematics
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Mathematics
Abstract
We consider three separate topics in nonlinear optimization: one theoretical topic and two algorithmic topics. Each of these topics deals with a broad class of nonlinear optimization problems. We first introduce and analyze a generalized parametric variational inequality problem $PVI(E, T, C, \psi, Z)$ in locally convex Hausdorff topological vector spaces. Based on Nikaido's coincidence theorem, we establish several general existence theorems without requiring convexity or contractibility of T and C, but merely a certain acyclic property. The case where C is not compact is also considered.
We also analyze the asymptotic convergence of the partial proximal point algorithm for solving the generalized nonlinear equation $0 \in T(x)$, where $T : H \to H$ is a maximal monotone multifunction and $H := H_{1} \times H_{2}$ is a product of two real Hilbert spaces. Under the mild feasibility assumption $0 \in \operatorname{int}(\operatorname{co} R(T))$, we show that the partial proximal point algorithm has the same convergence properties as Rockafellar's proximal point algorithm. Moreover, the partial convergence rates are shown to depend upon how rapidly $T^{-1}$ grows away from the solution set $T^{-1}(0)$: the partial rate of convergence is linear, superlinear, or quadratic, depending upon certain parameters.
Finally, we describe a primal-dual path-following interior point algorithm for solving the pair of primal-dual problems (P) and (D):
(P)  $\inf_{x}\{f(x);\ Ax = b,\ x \ge 0\}$
(D)  $\sup_{x,s}\{g(x,s);\ Ax = b,\ x \ge 0$ and $s = \nabla f(x) - A^{T}y \ge 0$ for some $y \in R^{m}\}.$
Here, f is a continuously differentiable convex function on $R^{n}$, and g is defined by $g(x, s) := f(x) - x^{T}s$. Under a kind of strict feasibility assumption, we show that the algorithm, suitably modified, requires a total of $O(\sqrt{n\ell})$ iterations, with total arithmetic operations of order $O(n^{3}\ell)$, where $\ell$ is the initial input size. As an application to standard linear or convex quadratic programming, this algorithm solves the pair (P) and (D) in at most $O(\sqrt{nL})$ iterations, with total arithmetic operations of order $O(n^{3}L)$, where L is the input size. Moreover, we show that the duality gap sequence converges to zero linearly, superlinearly, or quadratically, depending upon a certain parameter $\theta$. We also show that any limit point of the induced sequence $(x^{k}, s^{k})$ is a maximal complementary solution of a certain monotone multifunction.
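
To make the second topic concrete, the following is a minimal numerical sketch of a partial proximal point iteration, written in Python for the affine maximal monotone operator $T(x) = Ax - b$ with A positive definite. The splitting of H into $H_{1} \times H_{2}$, the matrix P projecting onto the $H_{1}$ coordinates, and the constant proximal parameter c are illustrative assumptions for this sketch, not the algorithm as developed in the thesis.

    import numpy as np

    # Toy instance of 0 in T(x) with T(x) = A x - b, A symmetric positive
    # definite, so T is maximal monotone on H = H1 x H2 (here R^2 x R^2).
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M.T @ M + 0.1 * np.eye(4)      # positive definite -> unique zero of T
    b = rng.standard_normal(4)
    x_star = np.linalg.solve(A, b)     # reference solution of 0 = A x - b

    # Partial proximal point iteration (sketch): only the H1 block of the
    # variables receives the proximal term.  P projects onto the H1 coordinates.
    P = np.diag([1.0, 1.0, 0.0, 0.0])  # assumed split: H1 = first two coordinates
    c = 1.0                            # proximal parameter c_k, kept constant here

    x = np.zeros(4)
    for k in range(200):
        # Solve 0 = T(x_next) + (1/c) P (x_next - x), i.e.
        # (A + P/c) x_next = b + (1/c) P x   (exact resolvent for affine T)
        x = np.linalg.solve(A + P / c, b + P @ x / c)

    print("distance to solution:", np.linalg.norm(x - x_star))

For this affine, strongly monotone T the error contracts linearly, in line with the linear-rate case described in the abstract; the superlinear and quadratic regimes depend on growth properties of $T^{-1}$ that this toy example does not exhibit.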
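A second sketch illustrates the third topic under simplifying assumptions: a short-step primal-dual path-following iteration for the linear programming special case of (P) (f linear). The toy data, the starting point placed exactly on the central path, and the constant theta = 0.4 are hypothetical choices for the sketch; it shows only the generic Newton step on the perturbed complementarity conditions and the geometric reduction of the duality gap, not the modified algorithm analyzed in the thesis.

    import numpy as np

    # Hypothetical toy LP:  min c^T x  s.t.  A x = b, x >= 0, with dual slacks
    # s = c - A^T y >= 0.  Data are rigged so that x = e, y = 0, s = e is a
    # strictly feasible primal-dual starting point on the central path (mu = 1).
    rng = np.random.default_rng(1)
    m, n = 3, 6
    A = rng.standard_normal((m, n))
    x = np.ones(n); y = np.zeros(m); s = np.ones(n)
    b = A @ x                   # primal right-hand side (defines the toy problem)
    c = A.T @ y + s             # cost vector consistent with the chosen (y, s)

    theta = 0.4                 # assumed short-step reduction parameter
    mu = x @ s / n              # current duality measure (equals 1 at the start)
    for k in range(60):
        mu *= 1.0 - theta / np.sqrt(n)        # shrink the barrier parameter
        # Newton step on the perturbed KKT system for feasible (x, y, s):
        #   A dx = 0,  A^T dy + ds = 0,  S dx + X ds = mu e - X S e
        r = mu * np.ones(n) - x * s
        D = x / s                             # diagonal of X S^{-1}
        dy = np.linalg.solve(A @ np.diag(D) @ A.T, -A @ (r / s))
        ds = -A.T @ dy
        dx = (r - x * ds) / s
        x, y, s = x + dx, y + dy, s + ds

    print("duality gap x^T s:", x @ s)

With this feasible short-step scheme the gap $x^{T}s$ shrinks by the factor $1 - \theta/\sqrt{n}$ each iteration, which is the linear-rate behavior; the superlinear and quadratic decrease of the duality gap mentioned in the abstract requires the adaptive choice of the parameter studied in the thesis.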