Task-specific and interpretable feature learning
Wang, Zhangyang
Permalink
https://hdl.handle.net/2142/95560
Description
- Title
- Task-specific and interpretable feature learning
- Author(s)
- Wang, Zhangyang
- Issue Date
- 2016-10-28
- Director of Research (if dissertation) or Advisor (if thesis)
- Huang, Thomas
- Doctoral Committee Chair(s)
- Huang, Thomas
- Committee Member(s)
- Hasegawa-Johnson, Mark
- Liang, Zhi-Pei
- Dolcos, Florin
- Yang, Jianchao
- Department of Study
- Electrical & Computer Engineering
- Discipline
- Electrical & Computer Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- deep learning
- sparse representation
- Abstract
- Deep learning models have had a tremendous impact in recent years, yet a question has been raised by many: is deep learning just a triumph of empiricism? There is growing interest in closing the gap between the theoretical soundness and interpretability of deep models and their empirical success. This dissertation provides a comprehensive discussion of how to bridge traditional model-based learning approaches, which emphasize problem-specific reasoning, and deep models, which allow for larger learning capacity. The overall goal is to devise next-generation feature learning architectures that are: 1) task-specific, namely, optimizing the entire pipeline from end to end while taking advantage of available prior knowledge and domain expertise; and 2) interpretable, namely, able to learn a representation consisting of semantically sensible variables and to display predictable behaviors. The dissertation starts by showing how classical sparse coding models can be improved in a task-specific way by formulating the entire pipeline as a bi-level optimization. It then illustrates how to incorporate the structure of classical learning models, e.g., sparse coding, into the design of deep architectures (a minimal illustrative sketch of this unrolling idea follows the record below). A few concrete model examples are presented, ranging from the $\ell_0$ and $\ell_1$ sparse approximation models to the $\ell_\infty$-constrained model and the dual-sparsity model. The analytic tools from the underlying optimization problems can be translated into guidance for the architecture design and performance analysis of deep models. As a result, the customized deep models demonstrate improved performance, intuitive interpretation, and efficient parameter initialization. On the other hand, deep networks are shown to be analogous to brain mechanisms: they can describe semantic content from the primitive level up to the abstract level. This dissertation therefore also presents a preliminary investigation of the synergy between feature learning and cognitive science and neuroscience. Two novel application domains, image aesthetics assessment and brain encoding, are explored, with promising preliminary results.
- Graduation Semester
- 2016-12
- Type of Resource
- text
- Permalink
- http://hdl.handle.net/2142/95560
- Copyright and License Information
- Copyright 2016 Zhangyang Wang
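
As referenced in the abstract above, here is a minimal sketch of the general "incorporate sparse coding structure into a deep architecture" idea: unrolling the iterative shrinkage-thresholding algorithm (ISTA) for $\ell_1$ sparse coding into a fixed-depth, learnable feed-forward network, in the spirit of LISTA. This is an illustrative assumption-laden example, not the dissertation's exact models; the module name `UnrolledSparseCoder`, the random dictionary initialization, and all dimensions and hyperparameters are made up for demonstration.

```python
# Hypothetical sketch: unroll ISTA for  min_z 0.5*||x - D z||^2 + lam*||z||_1
# into a small feed-forward network whose layer parameters are learnable.
import torch
import torch.nn as nn


class UnrolledSparseCoder(nn.Module):
    """A fixed number of unrolled ISTA iterations with learnable parameters."""

    def __init__(self, input_dim, code_dim, num_layers=3, lam=0.1):
        super().__init__()
        D = torch.randn(input_dim, code_dim)
        D = D / D.norm(dim=0, keepdim=True)                  # column-normalized dictionary (random here)
        L = float(torch.linalg.matrix_norm(D, ord=2) ** 2)   # Lipschitz constant of the data-fit gradient
        # One ISTA step: z <- soft(W_e x + S z, theta), with
        # W_e = D^T / L,  S = I - D^T D / L,  theta = lam / L.
        self.W_e = nn.Parameter(D.t() / L)
        self.S = nn.Parameter(torch.eye(code_dim) - D.t() @ D / L)
        self.theta = nn.Parameter(torch.full((code_dim,), lam / L))
        self.num_layers = num_layers

    @staticmethod
    def soft_threshold(u, theta):
        # Proximal operator of the l1 norm (soft-thresholding).
        return torch.sign(u) * torch.clamp(u.abs() - theta, min=0.0)

    def forward(self, x):
        b = x @ self.W_e.t()                                  # encoder term, shared by all layers
        z = self.soft_threshold(b, self.theta)                # first layer (code initialized at zero)
        for _ in range(self.num_layers - 1):
            z = self.soft_threshold(b + z @ self.S.t(), self.theta)
        return z


if __name__ == "__main__":
    x = torch.randn(8, 64)                                    # a batch of 8 input signals
    model = UnrolledSparseCoder(input_dim=64, code_dim=128, num_layers=3)
    z = model(x)                                              # approximate sparse codes
    print(z.shape, (z != 0).float().mean().item())            # code shape and fraction of nonzeros
```

Training the layer parameters end to end on a downstream loss, rather than fixing them from a pre-learned dictionary, is what makes such an architecture task-specific, while each layer retains the interpretation of one sparse coding iteration.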
Owning Collections
- Graduate Dissertations and Theses at Illinois (primary)
- Dissertations and Theses - Electrical and Computer Engineering