Stochastic approximation schemes for stochastic optimization and variational problems: adaptive steplengths, smoothing, and regularization
Yousefian, Seyed Farzad
Permalink
https://hdl.handle.net/2142/45385
Description
- Title
- Stochastic approximation schemes for stochastic optimization and variational problems: adaptive steplengths, smoothing, and regularization
- Author(s)
- Yousefian, Seyed Farzad
- Issue Date
- 2013-08-22
- Director of Research (if dissertation) or Advisor (if thesis)
- Nedich, Angelia
- Shanbhag, Vinayak V.
- Doctoral Committee Chair(s)
- Nedich, Angelia
- Committee Member(s)
- Shanbhag, Vinayak V.
- Srikant, Rayadurgam
- Zhou, Enlu
- Department of Study
- Industrial & Enterprise Sys Eng
- Discipline
- Industrial Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Stochastic approximation methods
- Stochastic optimization
- Stochastic variational inequalities
- Game theory
- Abstract
- Stochastic approximation (SA) methods, first proposed by Robbins and Monro in 1951 for root-finding problems, have been widely used in the literature to solve problems arising from stochastic convex optimization, stochastic Nash games, and, more recently, stochastic variational inequalities. Several challenges arise in the development of SA schemes. First, little guidance is provided on the choice of the steplength sequence. Second, most variants of these schemes in optimization require differentiability of the objective function and Lipschitz continuity of its gradient. Finally, strong convexity of the objective function is a restrictive assumption that often fails to hold. Motivated by these challenges, this thesis studies SA methods in three areas: (i) steplengths, (ii) smoothing, and (iii) regularization. The first part of this thesis pertains to solving strongly convex differentiable stochastic optimization problems using SA methods. The performance of standard SA implementations can vary significantly with the choice of the steplength sequence, and, in general, little guidance is available on good choices. Motivated by this gap, we present two adaptive steplength schemes, equipped with convergence theory, that aim to reduce the reliance on user-specific parameters. The first, referred to as the recursive steplength stochastic approximation (RSA) scheme, minimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. The second, termed the cascading steplength stochastic approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in steplength occurring when a suitable error threshold is met.
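The adaptive-steplength idea above can be sketched in a few lines. This is a minimal illustrative instance, not the thesis's derived rule: the recursion gamma_{k+1} = gamma_k * (1 - c * gamma_k), with c a hypothetical problem-dependent constant, stands in for the error-bound-minimizing RSA update.

```python
import random

def rsa_sgd(grad, x0, gamma0, c, iters, noise=0.1, seed=0):
    """Stochastic approximation with a recursive steplength update.

    gamma_{k+1} = gamma_k * (1 - c * gamma_k) is a simple recursion that
    expresses the next steplength as a function of the previous one and a
    problem parameter c (an illustrative stand-in for the RSA rule).
    """
    rng = random.Random(seed)
    x, gamma = x0, gamma0
    for _ in range(iters):
        g = grad(x) + rng.gauss(0.0, noise)  # noisy gradient sample
        x -= gamma * g                       # SA step
        gamma *= 1.0 - c * gamma             # recursive steplength update
    return x, gamma

# Minimize the strongly convex f(x) = 0.5 * (x - 2)^2 with gradient x - 2.
x_star, gamma_k = rsa_sgd(lambda x: x - 2.0, x0=0.0, gamma0=0.5, c=0.5, iters=2000)
```

Note that the steplength sequence needs no hand-tuned schedule: starting from gamma0, the recursion drives gamma_k to zero at roughly a 1/k rate, which is the behavior required for almost-sure convergence in the strongly convex setting.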
We then allow for nondifferentiable objectives with bounded subgradients over a certain domain. In this regime, we propose a local smoothing technique, based on random local perturbations of the objective function, that yields a differentiable approximation of the function whose gradient is Lipschitz continuous. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution. Motivated by problems arising in decentralized control and non-cooperative Nash games, the second part of this thesis considers a class of strongly monotone Cartesian variational inequality problems in which the mappings either contain expectations or have evaluations corrupted by error. Such complications are captured under the umbrella of Cartesian stochastic variational inequality (CSVI) problems, and we consider solving these problems via SA schemes. Specifically, along directions similar to the RSA scheme, a stepsize rule is constructed for strongly monotone stochastic variational inequality problems. The proposed scheme produces sequences that are guaranteed to converge almost surely to the unique solution of the problem. To cope with networked multi-agent generalizations, we provide conditions under which independently chosen steplength rules retain the desired almost-sure convergence properties. To address nonsmoothness, we consider a regime where Lipschitz constants of the map are either unavailable or difficult to derive. Here, we generalize the aforementioned smoothing scheme to derive an approximation of the original mapping, which is shown to be Lipschitz continuous with a prescribed constant.
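A one-dimensional sketch of the smoothing idea, assuming a uniform perturbation on [-eps, eps]: a subgradient evaluated at a randomly perturbed point is an unbiased sample of the gradient of the smoothed function f_eps(x) = E[f(x + z)], so each SA step effectively samples from the product of the original noise and the artificially introduced perturbation distribution. The parameter choices below are illustrative, not the thesis's tuned values.

```python
import random

def smoothed_sa(subgrad, x0, eps, gamma0, c, iters, seed=0):
    """SA on a locally smoothed objective (illustrative 1-D sketch).

    Each step draws a perturbation z ~ Uniform[-eps, eps] and takes a
    subgradient at x + z: an unbiased sample of the gradient of the
    smoothed function f_eps(x) = E[f(x + z)], which is differentiable
    with a Lipschitz gradient even when f itself is not.
    """
    rng = random.Random(seed)
    x, gamma = x0, gamma0
    for _ in range(iters):
        z = rng.uniform(-eps, eps)   # artificially introduced smoothing noise
        x -= gamma * subgrad(x + z)  # subgradient at the perturbed point
        gamma *= 1.0 - c * gamma     # adaptive steplength, as before
    return x

# Minimize the nondifferentiable f(x) = |x|; sign(x) is a subgradient.
x_star = smoothed_sa(lambda x: (x > 0) - (x < 0), x0=1.0, eps=0.1,
                     gamma0=0.5, c=0.5, iters=2000)
```

The iterates approach a minimizer of the smoothed function, which lies within eps of the minimizer of the original objective, so the smoothing radius controls the approximation error.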
Using this technique, we introduce a locally randomized SA algorithm and provide almost-sure convergence theory for the resulting sequence of iterates to an approximate solution of the original CSVI problem. In the third part of this thesis, we consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a compact and convex set. Traditionally, stochastic approximation schemes for SVIs have relied on strong monotonicity and Lipschitzian properties of the underlying map. We present a regularized smoothed SA (RSSA) scheme in which the stepsize, smoothing, and regularization parameters are updated after every iteration. Under suitable assumptions on these sequences, we show that the algorithm generates iterates that converge almost surely to a solution of the SVI problem. Additionally, we provide rate estimates that relate the iterates to their counterparts along the Tikhonov trajectory of an associated deterministic problem.
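A minimal sketch of such an iteration-varying scheme for a monotone (but not strongly monotone) map over a compact set follows. The decay exponents and noise level are illustrative placeholders, not the thesis's parameter sequences; the point is only the structure: a projected step on the smoothed map plus a vanishing Tikhonov term eta_k * x.

```python
import random

def rssa(F, proj, x0, iters, sigma=0.1, seed=0):
    """Regularized smoothed SA sketch for a monotone stochastic VI.

    The steplength gamma_k, smoothing radius eps_k, and Tikhonov
    regularization weight eta_k all decay with the iteration count
    (the decay exponents here are illustrative choices).
    """
    rng = random.Random(seed)
    x = x0
    for k in range(1, iters + 1):
        gamma = k ** -0.6           # steplength
        eps = k ** -0.3             # smoothing radius
        eta = k ** -0.3             # regularization weight
        z = rng.uniform(-eps, eps)  # local smoothing perturbation
        w = rng.gauss(0.0, sigma)   # stochastic error in the map evaluation
        x = proj(x - gamma * (F(x + z) + eta * x + w))  # projected step
    return x

# Monotone (not strongly monotone) map F(x) = x**3 over the set [-2, 2];
# the unique solution of this variational inequality is x* = 0.
x_star = rssa(lambda x: x ** 3, lambda y: max(-2.0, min(2.0, y)), x0=1.5, iters=5000)
```

The regularization term makes each subproblem strongly monotone with modulus eta_k; letting eta_k vanish slowly enough keeps the iterates tracking the Tikhonov trajectory toward a solution of the unregularized problem.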
- Graduation Semester
- 2013-08
- Copyright and License Information
- Copyright 2013 Seyed Farzad Yousefian
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)