Multi-model-based defense against adversarial examples for neural networks
Srisakaokul, Siwakorn
Description
- Title
- Multi-model-based defense against adversarial examples for neural networks
- Author(s)
- Srisakaokul, Siwakorn
- Issue Date
- 2020-05-11
- Advisor(s)
- Xie, Tao
- Li, Bo
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- security and privacy, machine learning
- Abstract
- Neural networks have recently been used to solve many real-world tasks, such as image recognition, and can achieve high effectiveness on these tasks. Despite their wide use in many applications, neural network models have been found to be vulnerable to adversarial examples, i.e., carefully crafted examples that aim to mislead machine learning models. Adversarial examples pose potential risks to safety- and security-critical applications. Existing defense approaches remain vulnerable to emerging attacks, especially in a white-box attack scenario. In this thesis, we focus on mitigating adversarial attacks by making machine learning models more robust against them. In particular, we propose a new defense approach, named MulDef, based on robustness diversity. Our approach consists of (1) a general defense framework based on diverse models and (2) a technique for generating diverse models to achieve high defense capability. Our framework generates multiple models (constructed from the target model) to form a model family. The model family is designed to achieve robustness diversity, i.e., an adversarial example crafted to attack one model may not succeed in attacking other models in the family. At runtime, a model is randomly selected from the family to process each input example (a minimal sketch of this runtime step appears after this record). Our evaluation results show that MulDef (with only up to 5 models in the family) can substantially improve the target model's robustness against adversarial examples by 19-78% in a white-box attack scenario across the MNIST, CIFAR-10, and Tiny ImageNet datasets, while maintaining similar accuracy on legitimate examples. Our general framework can also inspire rich future research on constructing a desirable model family that achieves higher robustness diversity.
- Graduation Semester
- 2020-05
- Type of Resource
- Thesis
- Permalink
- http://hdl.handle.net/2142/108026
- Copyright and License Information
- Copyright 2020 Siwakorn Srisakaokul
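The runtime step described in the abstract, selecting one model from the family at random for each input, is simple enough to sketch. The Python below is a hypothetical illustration, not code from the thesis: the class name, the stand-in models, and the example input are all invented here, and the thesis's actual contribution, the technique for generating family members with robustness diversity, is not shown.

    import random

    class MulDefEnsemble:
        """Hypothetical sketch of MulDef's runtime step: hold a family of
        diverse models and route each input to a randomly chosen member."""

        def __init__(self, models):
            # Family members are assumed to be built from the target model
            # so that an attack crafted against one tends to fail on others.
            self.models = list(models)

        def predict(self, example):
            # Per-input random selection is the defense's runtime mechanism.
            return random.choice(self.models)(example)

    # Stand-in "models": any callables mapping an input to a label.
    family = MulDefEnsemble([lambda x: "3", lambda x: "3", lambda x: "8"])
    print(family.predict([0.0] * 784))  # e.g., a flattened 28x28 MNIST image

Randomizing the choice per input is what forces a white-box attacker to craft an example that fools several diverse members at once, rather than one fixed model.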
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)
Dissertations and Theses - Computer Science