Adversarial machine learning is a field of research at the intersection of machine learning and security that studies the vulnerabilities of machine learning models which make them susceptible to attacks. These attacks are mounted by carefully crafting a perturbed input that appears benign but causes the model to behave in unexpected ways.
To date, most work on adversarial attacks and defenses has focused on classification models. However, generative models are susceptible to attacks as well and thus warrant attention. We study attacks on generative models such as Autoencoders and Variational Autoencoders, discuss the relative effectiveness of the attack methods, and explore some simple defenses against them.
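For concreteness, a common attack in this family perturbs the input by one signed-gradient step that maximizes the model's reconstruction loss. The sketch below is illustrative only and is not taken from the thesis; the architecture, loss, and `epsilon` are assumptions chosen for brevity.

```python
# Minimal sketch of an FGSM-style attack on an autoencoder (assumed
# setup, not the thesis's method): nudge the input in the direction
# that most increases reconstruction error, keeping the change small.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fgsm_attack(model, x, epsilon=0.1):
    """Return x plus one epsilon-sized signed-gradient step that
    increases reconstruction loss while staying visually benign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), x)  # reconstruction error
    loss.backward()
    with torch.no_grad():
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

model = TinyAutoencoder()
x = torch.rand(8, 784)          # stand-in for a batch of flattened images
x_adv = fgsm_attack(model, x)
print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
```

The same idea extends to Variational Autoencoders by attacking the latent representation instead of the reconstruction, e.g., pushing the encoding of a benign input toward that of a chosen target.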