Improving robust accuracy through gradient boosting with ADP
Fan, Zhicong
Permalink
https://hdl.handle.net/2142/110279
Description
Title
Improving robust accuracy through gradient boosting with ADP
Author(s)
Fan, Zhicong
Contributor(s)
Li, Bo
Issue Date
2021-05
Keyword(s)
Adversarial Machine Learning
Gradient Boosting
Ensemble Model
Adaptive Diversity Promoting Strategy
XGBoost
Deep Neural Networks
Abstract
Adversarial examples are inputs that have been deliberately perturbed: humans can still classify such images easily, yet recent work has shown that deep neural networks are vulnerable to these adversarial attacks [1]. To increase robustness against adversarial attacks, many defense methods have been proposed, such as k-Winners [2], robust sparse Fourier Transform [3], and Compact Convolution [4]. Many of these defense strategies aim to mask the gradient, train diverse classifiers, or use new loss functions. In this thesis, several ensemble models were trained by combining typical gradient boosting with an adaptive diversity promoting (ADP) strategy that enlarges the diversity among base models, in order to improve their robustness against adversarial attacks. The purpose is to show that making adversarial examples difficult to transfer among the individual members causes state-of-the-art attack algorithms to fail, to a certain extent, against the trained robust ensemble model.
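The adaptive diversity promoting (ADP) strategy named in the keywords trains ensemble members with a regularizer that rewards both high entropy of the averaged prediction and geometric diversity among the members' non-maximal class predictions, which is what makes adversarial examples harder to transfer between members. Below is a minimal PyTorch sketch of such a regularized ensemble loss; the function name adp_loss, the coefficient values, and the framework choice are illustrative assumptions, not the thesis's actual implementation.

import torch
import torch.nn.functional as F

def adp_loss(member_logits, labels, alpha=2.0, beta=0.5, eps=1e-20):
    """Hypothetical ADP-style ensemble loss.

    member_logits: list of (batch, classes) logit tensors, one per base model.
    labels: (batch,) integer class labels.
    """
    probs = [F.softmax(z, dim=1) for z in member_logits]        # per-member predictions
    mean_p = torch.stack(probs).mean(dim=0)                     # averaged ensemble prediction

    # Standard ensemble cross-entropy on the averaged prediction.
    ce = F.nll_loss(torch.log(mean_p + eps), labels)

    # Entropy of the averaged prediction (encouraged, hence subtracted below).
    entropy = -(mean_p * torch.log(mean_p + eps)).sum(dim=1).mean()

    # Non-maximal predictions: zero out each sample's true-class probability,
    # then L2-normalize what remains for each member.
    mask = F.one_hot(labels, mean_p.size(1)).bool()
    non_max = [F.normalize(p.masked_fill(mask, 0.0), dim=1) for p in probs]
    M = torch.stack(non_max, dim=2)                             # (batch, classes, members)

    # Ensemble diversity: det(M^T M), the squared volume spanned by the
    # members' normalized non-maximal prediction vectors.
    ed = torch.det(M.transpose(1, 2) @ M)                       # (batch,)
    log_ed = torch.log(ed + eps).mean()

    return ce - alpha * entropy - beta * log_ed

In use, this loss would be backpropagated jointly through all base models (e.g., several small CNNs on an image benchmark) so that their non-maximal predictions spread apart while the averaged prediction stays accurate.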