Adversarial methods in machine learning - a federated defense and an attack
Shah, Devansh
Permalink
https://hdl.handle.net/2142/110735
Description
- Title
- Adversarial methods in machine learning - a federated defense and an attack
- Author(s)
- Shah, Devansh
- Issue Date
- 2021-04-26
- Director of Research (if dissertation) or Advisor (if thesis)
- Li, Bo
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- Machine Learning
- Adversarial Learning
- Federated Learning
- Computer Vision
- Abstract
- Deep neural networks have recently been shown to provide state-of-the-art results for several machine learning tasks in computer vision and natural language processing applications. These developments make the security aspects of machine learning increasingly important. Unfortunately, neural networks are vulnerable to adversarial examples — inputs that are almost indistinguishable from natural data and yet elicit misclassification from the network. The focus of this thesis is to investigate the space of adversarial examples in hitherto novel applications. We first study Adversarial Training (AT), a defense against adversarial examples, in a federated learning setup. Federated learning is a paradigm for multi-round model training over a distributed corpus of agent data. We propose FedDynAT, a novel algorithm for performing AT in a federated setting. Through extensive experimentation, we show that FedDynAT significantly improves both natural and adversarial accuracy, as well as model convergence time, by reducing model drift. We next formulate an attack against 3D reconstruction models. While adversarial examples for 2D images and convolutional neural networks have been extensively studied, less attention has been paid to attacking 3D reconstruction models. 3D reconstruction models have been widely applied to various domains, such as e-commerce, architecture, CAD, virtual reality, and medical processes. It is thereby of great importance to explore the vulnerabilities of such 3D models and to design methods to improve their robustness in practice. We propose a novel 3D Spatial-Pixel Joint Optimization attack (3D-SPJO) to generate adversarial 2D input against a 3D reconstruction model, causing it to reconstruct an attacker-specified 3D voxelized grid. We conduct extensive ablation studies to evaluate 3D-SPJO on 3D-R2N2 and Pix2Vox, two state-of-the-art 3D reconstruction models trained on the ShapeNet dataset.
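- The abstract's core notion — an adversarial example as a small perturbation that flips a model's prediction — can be illustrated with the classic Fast Gradient Sign Method (FGSM) of Goodfellow et al. This is a generic one-step sketch on a toy logistic-regression model, not the thesis's FedDynAT defense or 3D-SPJO attack; the model, weights, and `eps` below are illustrative assumptions (with `eps` chosen large so the label flip is unambiguous).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, eps):
    """One-step FGSM on a logistic model p(y=1|x) = sigmoid(w.x + b).

    For binary cross-entropy loss, dL/dz = p - y, so the gradient of the
    loss w.r.t. the input is (p - y) * w; FGSM perturbs x by eps in the
    sign of that gradient, maximizing loss under an L-infinity budget.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A point the toy model classifies correctly as class 1 ...
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])
clean_p = sigmoid(w @ x + b)    # ~0.82: predicted class 1

# ... is misclassified after a bounded L-infinity perturbation.
x_adv = fgsm_attack(x, w, b, y=1, eps=1.0)
adv_p = sigmoid(w @ x_adv + b)  # ~0.18: predicted class 0
```

The same principle — follow the sign of the input gradient under a norm bound — underlies the stronger iterative attacks that adversarial training defends against.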
- Graduation Semester
- 2021-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2021 Devansh Shah
Owning Collections
- Graduate Dissertations and Theses at Illinois (PRIMARY): Graduate Theses and Dissertations at Illinois
- Dissertations and Theses - Computer Science: Dissertations and Theses from the Dept. of Computer Science