Enhancing the robustness of machine learning models
Xu, Xiaojun
This item is available for download only to members of the University of Illinois community. Others may request a copy through their library's Inter-Library Loan office or purchase one directly from ProQuest.
Permalink
https://hdl.handle.net/2142/121313
Description
Title
Enhancing the robustness of machine learning models
Author(s)
Xu, Xiaojun
Issue Date
2023-06-21
Director of Research (if dissertation) or Advisor (if thesis)
Gunter, Carl A.
Li, Bo
Doctoral Committee Chair(s)
Gunter, Carl A.
Li, Bo
Committee Member(s)
Borisov, Nikita
Zhang, Ce
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Machine Learning Robustness
Abstract
Machine learning models have recently shown surprisingly good performance on real-world tasks. As these models are deployed more widely, there are growing concerns about whether they are robust against potential threats. In this thesis, we explore the robustness of machine learning models against adversarial threats in three scenarios that have received relatively little attention from the community. First, we investigate backdoor attacks, which perturb both the training and evaluation stages. As a countermeasure to this stealthy and dangerous attack, we present a defense that formulates backdoor detection as a binary classification task over neural networks. Second, we examine the robustness of models on graph data. We demonstrate that manipulating the discrete edge space can deceive graph neural networks into attacker-desired behavior with stealthy perturbations, and we offer a countermeasure that detects maliciously injected edges using an ensemble of multiple models. Finally, we discuss how model architecture design can provide a robustness guarantee. We present two Lipschitz-constrained models, one for convolutional networks and one for Transformer networks, and show that such Lipschitz-constrained models achieve strong certified robustness. Overall, our work enhances machine learning robustness against various adversarial threats with effective countermeasures.
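To illustrate the kind of guarantee Lipschitz-constrained models provide, here is a minimal sketch of the standard margin-based certification argument, not the thesis's own implementation: if the logit map of a network is L-Lipschitz in the l2 norm, any prediction whose top logit exceeds the runner-up by margin m is provably unchanged under input perturbations of l2 norm below m / (sqrt(2) * L). The PyTorch helpers below are hypothetical illustrations of this idea; the names certified_radius and layer_lipschitz are chosen here for exposition and do not come from the thesis.

    import torch

    def layer_lipschitz(weight: torch.Tensor, n_iter: int = 50) -> float:
        # Upper-bound a linear layer's l2 Lipschitz constant via power
        # iteration (largest singular value of the weight matrix).
        u = torch.randn(weight.shape[0])
        for _ in range(n_iter):
            v = weight.t() @ u
            v = v / v.norm()
            u = weight @ v
            u = u / u.norm()
        return (u @ weight @ v).item()

    def certified_radius(logits: torch.Tensor, lip_const: float) -> torch.Tensor:
        # Certified l2 radius per example: margin / (sqrt(2) * L), where
        # lip_const bounds the Lipschitz constant of the whole logit map.
        top2 = logits.topk(2, dim=1).values   # top two logits per example
        margin = top2[:, 0] - top2[:, 1]      # gap to the runner-up class
        return margin / (2 ** 0.5 * lip_const)

For a feed-forward network with 1-Lipschitz activations such as ReLU, the product of the per-layer constants returned by layer_lipschitz upper-bounds the Lipschitz constant of the whole network; this composability is what makes architecture-level Lipschitz constraints attractive for certified robustness.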