Contrasting with adversarial examples improves self-supervised representation learning
Yuan, Meilu
Permalink: https://hdl.handle.net/2142/115961
Description
- Title: Contrasting with adversarial examples improves self-supervised representation learning
- Author(s): Yuan, Meilu
- Issue Date: 2022-07-21
- Advisor: Koyejo, Oluwasanmi
- Department of Study: Computer Science
- Discipline: Computer Science
- Degree Granting Institution: University of Illinois at Urbana-Champaign
- Degree Name: M.S.
- Degree Level: Thesis
- Keyword(s): Self-Supervised Learning; Representation Learning
- Abstract
- Self-supervised contrastive learning has recently attracted enormous attention because of its strong performance. These methods aim to learn representations useful for downstream tasks by contrasting different views of the data, so that representations of similar data are pulled together while representations of dissimilar data are pushed apart. Existing work forms positive examples by randomly composing meaningful transformations such as cropping, rotation, color distortion, and blurring. Adversarial examples are a special view of the data: perturbations specifically designed to maximize the loss, steering the final prediction in the most damaging direction. In this work, we explore the impact of using adversarially distorted examples as positive examples in self-supervised representation learning. To make the positive examples in the contrasting stage more challenging, in the forward pass we generate adversarially distorted images, extract representations of each anchor-adversarial pair, and compute the contrastive loss in the latent space; in the backward pass, we push the representation vectors of these two positive examples toward full correlation. Because adversarial attacks improve the feature invariance of the learned representation, experimental results show that they further improve downstream task performance, robustness, and the generalization ability of the trained encoder.
- Graduation Semester: 2022-08
- Type of Resource: Thesis
- Copyright and License Information: Copyright 2022 Meilu Yuan
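The forward pass described in the abstract (generate an adversarial view of a positive, embed the anchor-adversarial pair, score them with a contrastive loss) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis's actual implementation: a linear encoder stands in for the deep network, additive Gaussian noise stands in for the image augmentations, an InfoNCE/NT-Xent loss stands in for the thesis's correlation objective, the input gradient is estimated by central differences rather than backpropagation, and the temperature `TAU` and FGSM step size `EPS` are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

TAU = 0.5   # softmax temperature (assumed value)
EPS = 0.05  # FGSM perturbation budget (assumed value)

def encode(W, x):
    """Linear encoder followed by L2 normalization (stand-in for a deep net)."""
    z = W @ x
    return z / np.linalg.norm(z)

def nt_xent(z_anchor, z_pos, z_negs):
    """InfoNCE / NT-Xent loss for one anchor against one positive and negatives."""
    sims = np.array([z_anchor @ z_pos] + [z_anchor @ n for n in z_negs]) / TAU
    sims -= sims.max()  # numerical stability
    return float(-np.log(np.exp(sims[0]) / np.exp(sims).sum()))

def adversarial_view(W, x_view, z_anchor, z_negs, h=1e-4):
    """FGSM-style positive: perturb the augmented view along the sign of the
    loss gradient, estimated here by central differences for brevity."""
    grad = np.zeros_like(x_view)
    for i in range(x_view.size):
        e = np.zeros_like(x_view)
        e[i] = h
        lp = nt_xent(z_anchor, encode(W, x_view + e), z_negs)
        lm = nt_xent(z_anchor, encode(W, x_view - e), z_negs)
        grad[i] = (lp - lm) / (2 * h)
    return x_view + EPS * np.sign(grad)

# Toy forward pass on random data.
d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in))
x = rng.normal(size=d_in)
view = x + 0.1 * rng.normal(size=d_in)  # cheap stand-in for augmentation
z_anchor = encode(W, x)
z_negs = [encode(W, rng.normal(size=d_in)) for _ in range(5)]

x_adv = adversarial_view(W, view, z_anchor, z_negs)
loss_plain = nt_xent(z_anchor, encode(W, view), z_negs)
loss_adv = nt_xent(z_anchor, encode(W, x_adv), z_negs)
```

In a training loop, `loss_adv` would then be backpropagated through the encoder, so the network learns representations that stay invariant even under the hardest (adversarial) view of each positive pair.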
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)