Contrasting with adversarial examples improves self-supervised representation learning
Yuan, Meilu
Permalink
https://hdl.handle.net/2142/115961
Description
Title
Contrasting with adversarial examples improves self-supervised representation learning
Author(s)
Yuan, Meilu
Issue Date
2022-07-21
Advisor
Koyejo, Oluwasanmi
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Self-Supervised Learning
Representation Learning
Abstract
Self-supervised contrastive learning has recently attracted enormous attention because of its strong performance. These methods learn representations useful for downstream tasks by contrasting objects across different views, so that the distance between the representations of similar data is minimized while the representations of dissimilar data are pushed apart. Existing works suggest randomly combining meaningful transformations, such as cropping, rotation, color distortion, and blurring, to form positive examples. As a special view of the data, adversarial examples are specifically designed to maximize the loss value and thus push the final prediction in the worst possible direction. In this work, we explore the impact of using adversarially distorted examples as positive examples in self-supervised representation learning. This yields more challenging positive pairs in the contrasting stage: in the forward pass, we generate adversarially distorted images, extract representations of each anchor-adversarial pair, and compute a contrastive loss in the latent space; in the backward pass, we push the representation vectors of the two positive examples toward full correlation. Because adversarial attacks improve the feature invariance of the learned representation, experimental results show that they further improve downstream task performance, robustness, and the generalization ability of the trained encoder.
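The contrasting step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the linear encoder, the NT-Xent contrastive loss, and the finite-difference FGSM-style attack are assumptions standing in for a deep network, the paper's loss, and a gradient-based attack, and the names `encode`, `nt_xent`, and `fgsm_view` are hypothetical.

```python
import numpy as np

def encode(x, W):
    """Toy linear encoder with L2 normalization (stand-in for a deep network)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two batches of paired views.

    Each row i of z1 is a positive pair with row i of z2; every other
    row in the combined batch acts as a negative."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)        # (2n, d), rows unit-norm
    sim = z @ z.T / tau                          # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)               # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

def fgsm_view(x, W, eps=0.1, h=1e-4):
    """FGSM-style adversarial view: step each input coordinate in the sign
    of a numeric gradient that increases the contrastive loss against the
    clean anchor (a real implementation would backpropagate instead)."""
    z_clean = encode(x, W)
    base = nt_xent(z_clean, encode(x, W))
    grad = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x2 = x.copy()
            x2[i, j] += h
            grad[i, j] = (nt_xent(z_clean, encode(x2, W)) - base) / h
    return x + eps * np.sign(grad)

# Forward pass of the scheme: clean anchor vs. adversarial positive view.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 6))                  # batch of 8 toy "images"
W = rng.standard_normal((6, 4))                  # encoder weights
x_adv = fgsm_view(x, W, eps=0.1)                 # adversarially distorted views
loss = nt_xent(encode(x, W), encode(x_adv, W))   # contrastive loss in latent space
```

In a full training loop, `loss` would then be backpropagated through the encoder so that the anchor and its adversarial view are pulled together, which is the mechanism the abstract credits for the improved invariance.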