Generative modeling of sequential data
Subakan, Y. Cem
Description
- Title
- Generative modeling of sequential data
- Author(s)
- Subakan, Y. Cem
- Issue Date
- 2018-04-13
- Director of Research (if dissertation) or Advisor (if thesis)
- Smaragdis, Paris
- Doctoral Committee Chair(s)
- Smaragdis, Paris
- Committee Member(s)
- Forsyth, David
- Hasegawa-Johnson, Mark
- Saatci, Yunus
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Generative Modeling, Sequential Modeling, Generative Adversarial Networks, Probabilistic Modeling, Method of Moments
- Abstract
- In this thesis, we investigate various approaches for generative modeling, with a special emphasis on sequential data. Namely, we develop methodologies that address issues of representation (modeling choices), learning paradigm (e.g., maximum likelihood, method of moments, adversarial training), and optimization.
  For the representation aspect, we make the following contributions:
  - We argue that using a multi-modal latent representation (unlike popular methods such as variational autoencoders or generative adversarial networks) significantly improves generative model learning, as evidenced by experiments on the handwritten digit dataset (MNIST) and the celebrity faces dataset (CelebA).
  - We prove that the standard factorial hidden Markov model defined in the literature is not statistically identifiable. We propose two alternative identifiable models and show their validity on unsupervised source separation examples.
  - We experimentally show that a convolutional neural network architecture provides a performance boost over time-agnostic methods such as non-negative matrix factorization and autoencoders.
  - We experimentally show that a recurrent neural network with a diagonal recurrent matrix increases convergence speed and final accuracy in most cases on a symbolic music modeling task (see the sketch after this record).
  For the learning paradigm aspect, we make the following contributions:
  - We propose a method-of-moments parameter learning framework for hidden Markov models (HMMs) with special transition structures, such as mixtures of HMMs, switching HMMs, and HMMs with mixture emissions.
  - We propose a new generative model learning method that performs approximate maximum likelihood parameter estimation for implicit generative models.
  - We argue that using an implicit generative model for audio source separation improves performance over models that specify a cost function, such as NMF or autoencoders trained via maximum likelihood. We show performance improvements on speech mixtures created from the TIMIT dataset.
  For the optimization aspect, we make the following contributions:
  - We show that the method-of-moments framework proposed in this thesis boosts model performance when used as an initialization scheme for the expectation-maximization algorithm.
  - We propose new optimization algorithms for the identifiable alternatives to the factorial HMM.
  - We propose a two-step optimization algorithm for learning implicit generative models that efficiently learns multi-modal latent representations.
- Graduation Semester
- 2018-05
- Type of Resource
- text
- Permalink
- http://hdl.handle.net/2142/100972
- Copyright and License Information
- Copyright 2018 Y. Cem Subakan
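One contribution listed in the abstract replaces the dense hidden-to-hidden matrix of a standard recurrent network with a diagonal one. The sketch below only illustrates that idea under assumed choices (NumPy, a tanh nonlinearity, random initialization, toy layer sizes); it is not code from the thesis.

```python
# Minimal sketch of a recurrent cell whose hidden-to-hidden weight is diagonal,
# stored as a vector and applied elementwise instead of as a full dense matrix.
# Sizes, nonlinearity, and initialization here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_in, n_hid):
    """Initialize input weights, the diagonal recurrent vector, and a bias."""
    return {
        "W_in": rng.normal(scale=0.1, size=(n_hid, n_in)),
        "w_rec": rng.uniform(low=-1.0, high=1.0, size=n_hid),  # diagonal entries
        "b": np.zeros(n_hid),
    }

def diagonal_rnn_forward(params, xs):
    """Run the diagonal RNN over a sequence xs of shape (T, n_in).

    Recurrence: h_t = tanh(W_in x_t + w_rec * h_{t-1} + b),
    where '*' is elementwise, i.e., a diagonal recurrent matrix.
    """
    h = np.zeros_like(params["b"])
    hs = []
    for x in xs:
        h = np.tanh(params["W_in"] @ x + params["w_rec"] * h + params["b"])
        hs.append(h)
    return np.stack(hs)

# Toy usage: a random 5-step sequence with 8 input features and 16 hidden units.
params = init_params(n_in=8, n_hid=16)
states = diagonal_rnn_forward(params, rng.normal(size=(5, 8)))
print(states.shape)  # (5, 16)
```

The diagonal constraint means each hidden unit feeds back only onto itself, so the recurrent part has n_hid parameters rather than n_hid^2.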
Owning Collections
- Graduate Dissertations and Theses at Illinois (PRIMARY)
- Dissertations and Theses - Computer Science