Knowledge transfer in vision tasks with incomplete data
Li, Zhizhong
Permalink
https://hdl.handle.net/2142/107956
Description
- Title
- Knowledge transfer in vision tasks with incomplete data
- Author(s)
- Li, Zhizhong
- Issue Date
- 2020-05-04
- Director of Research (if dissertation) or Advisor (if thesis)
- Hoiem, Derek
- Doctoral Committee Chair(s)
- Hoiem, Derek
- Committee Member(s)
- Lazebnik, Svetlana
- Schwing, Alexander G
- Luo, Linjie
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Knowledge transfer
- incomplete data
- transfer learning
- continual learning
- deep learning
- computer vision
- Abstract
- In many machine learning applications, some assumptions are so prevalent as to be left unwritten: all necessary data are available throughout the training process, the training and test data are independent and identically distributed (i.i.d.), and the dataset sampling sufficiently represents the test data of the model's usage scenario. Transfer learning methods can help when some of these assumptions are broken in real life, but they still often assume that the data from which old and new knowledge can be learned remain available at all times. In practice, necessary data, or aspects of them, can become inaccessible due to incomplete knowledge of test scenarios, privacy or legal concerns, protection of business leverage, evolving goals, etc. In this thesis, we address three transfer learning scenarios in neural networks that regularly occur in practice but differ from both standard i.i.d. assumptions and common transfer learning data availability assumptions. First, when transferring knowledge from previous tasks whose training data are no longer available, we propose a method to extend and fine-tune the neural network to incorporate new classifiers while retaining the performance of existing classifiers. Second, for unsupervised domain adaptation, where target-domain annotations are unavailable, we propose a method to more effectively transfer models to the unsupervised target domain by guiding them with a common auxiliary task whose ground truth can be obtained for free or is already annotated. Finally, we show that, when test data are not i.i.d. with the training data, classifiers are prone to confident but wrong predictions. In practical scenarios where the test data distribution is unknown before deploying the model, we explore ideas from several research fields to reduce confident errors. We observe that calibrated ensembles are the most effective, followed by single models calibrated using temperature scaling.
- Graduation Semester
- 2020-05
- Type of Resource
- Thesis
- Copyright and License Information
- (c) 2020 Zhizhong Li
Owning Collections
- Graduate Dissertations and Theses at Illinois (primary)
- Dissertations and Theses - Computer Science