Algorithmic foundation of fair graph mining
Kang, Jian
Permalink
https://hdl.handle.net/2142/121924
Description
- Title
- Algorithmic foundation of fair graph mining
- Author(s)
- Kang, Jian
- Issue Date
- 2023-07-31
- Director of Research (if dissertation) or Advisor (if thesis)
- Tong, Hanghang
- Doctoral Committee Chair(s)
- Tong, Hanghang
- Committee Member(s)
- Han, Jiawei
- Maciejewski, Ross
- Zhao, Han
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Graph mining
- algorithmic fairness
- trustworthy artificial intelligence
- Abstract
- In an increasingly connected world, graph mining plays a fundamental role in many real-world applications, such as financial fraud detection, drug discovery, and traffic prediction. Years of research in this area have produced a wealth of theories, algorithms, and systems that successfully answer what/who questions, e.g., who is most influential in a social network, or what item should we recommend to a user? Despite this remarkable progress, unfairness often arises in many graph mining tasks. As such, a fundamental question remains largely open: how can we make the graph mining process and its results fair? Answering it requires a paradigm shift, from answering what and who to answering how and why. Building an algorithmic foundation of fair graph mining calls for four desired properties: utility, which promises strong empirical performance on the mining task; fairness, which avoids discriminatory performance across sensitive groups or individuals; robustness, which strengthens resilience to noise and adversarial activity in the complex real world; and transparency, which supports the accountability and explainability of graph mining algorithms. The tensions among these properties require us to address three key challenges, namely the auditing challenge, the debiasing challenge, and the safeguarding challenge. First, the auditing challenge addresses the tension between utility and transparency by understanding how the mining results of a given graph mining model relate to the input graph. Second, the debiasing challenge calls for balancing the trade-off between utility and fairness, ensuring fairness in graph mining without much sacrifice of utility. Third, the safeguarding challenge connects utility, fairness, robustness, and transparency, and studies the tensions among them, which supports the deployment of fair graph mining techniques in the real world. The theme of my Ph.D. research is to build an algorithmic foundation of fair graph mining by developing computational models underpinning all three pillars, namely auditing, debiasing, and safeguarding, to address these key challenges. First, for auditing, we develop a family of algorithms, AURORA, to audit the PageRank algorithm at the edge, node, and subgraph levels, and a generic algorithmic framework, N2N, that audits a variety of graph mining algorithms from the optimization perspective. Moreover, we develop JuryGCN, the first frequentist approach to quantify node uncertainty in graph convolutional networks without any epochs of model training. JuryGCN is shown to be useful in both active learning for node classification and semi-supervised node classification, achieving the best effectiveness and the lowest memory usage among competitors. Second, for debiasing, we offer the first systematic study of individual fairness in graph mining (InFoRM), including its measurement, mitigation strategies, and cost (see the sketch after this record). We also design a family of algorithms, RawlsGCN, to mitigate degree-related unfairness by analyzing its mathematical root cause. Moreover, we ensure fairness among intersectional groups from an information-theoretic perspective. Third, for safeguarding, we explore the adversarial robustness of fair graph mining algorithms by attacking them with a meta-learning-based attack framework named FATE. The developed framework is broadly applicable to various fairness definitions and graph learning models, as well as to arbitrary choices of manipulation operations. We also analyze the poisoned edges to reveal which edge properties contribute most to bias amplification in graph neural networks.
- Graduation Semester
- 2023-12
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2023 Jian Kang
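The abstract mentions that InFoRM includes a measurement of individual fairness in graph mining. As a minimal sketch, assuming the commonly cited InFoRM-style formulation in which the overall individual bias of a mining result Y under a node similarity matrix S is Tr(Y^T L_S Y), with L_S the Laplacian of S, the following Python snippet (illustrative variable names and toy data, not the dissertation's own code) computes that quantity:

```python
import numpy as np

def individual_bias(Y, S):
    """Overall individual bias of a mining result Y under node similarity S.

    Sketch of the Tr(Y^T L_S Y) measure: small values mean that nodes deemed
    similar by S receive similar mining results (rows of Y).
    """
    L_S = np.diag(S.sum(axis=1)) - S  # Laplacian of the similarity matrix
    return float(np.trace(Y.T @ L_S @ Y))

# Toy usage (hypothetical data): 4 nodes, 2-dimensional mining results,
# e.g., node embeddings or class scores.
S = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
Y = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.8],
              [0.1, 0.9]])
print(individual_bias(Y, S))  # lower is fairer under this measure
```

Because this quantity is a Laplacian smoothness term, it can also serve as a regularizer when mitigating individual bias; whether this exact notation matches the dissertation's is an assumption here.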
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)