Active heterogeneous graph neural networks with per-step meta-Q-learning
Zhang, Yuheng
Permalink
https://hdl.handle.net/2142/115599
Description
- Title
- Active heterogeneous graph neural networks with per-step meta-Q-learning
- Author(s)
- Zhang, Yuheng
- Issue Date
- 2022-04-26
- Director of Research (if dissertation) or Advisor (if thesis)
- Tong, Hanghang
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- Active learning
- Meta-Reinforcement learning
- Heterogeneous graph neural network
- Abstract
- Recent years have witnessed the superior performance of heterogeneous graph neural networks (HGNNs) in dealing with heterogeneous information networks (HINs). Nonetheless, the success of HGNNs often depends on the availability of sufficient labeled training data, which can be very expensive to obtain in real scenarios. Active learning provides an effective solution to the data scarcity challenge: by actively acquiring the most informative samples, the performance of machine learning models can be greatly boosted at limited annotation cost. The vast majority of existing work on active learning for graphs focuses on homogeneous graphs and therefore falls short, or even becomes inapplicable, on HINs. In this thesis, we study the active learning problem with HGNNs and propose a novel meta-reinforced active learning framework, MetRA. We formulate the active learning process as a Markov Decision Process (MDP) and employ deep Q-learning to learn the labeling policy. Previous reinforced active learning algorithms train the policy network on labeled source graphs and transfer the policy directly to the target graph without any adaptation. To better exploit information from the target graph in the adaptation phase, we propose a novel policy transfer algorithm based on meta-Q-learning, termed per-step MQL. Specifically, at each time step we measure the similarity between the transitions in the meta-training replay buffer and the current target-graph state; source transitions with high similarity to the target graph are recycled to adapt the policy via off-policy updates (see the illustrative sketch after this record). Notably, our per-step MQL algorithm can be generalized to other reinforced active learning frameworks. Empirical evaluations on both HINs and homogeneous graphs demonstrate the effectiveness and efficiency of the proposed framework, with improvements over the best baseline of up to 7% in Micro-F1.
- Graduation Semester
- 2022-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2022 Yuheng Zhang
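
A minimal illustrative sketch of the per-step transition reuse described in the abstract is given below. It is not the thesis code: the Q-network architecture, the cosine-similarity measure, the replay-buffer layout, and all names (QNet, similarity, adapt_step, tau, gamma) are assumptions made here for illustration. The sketch scores each stored source transition by the similarity of its state to the current target-graph state, keeps only high-similarity transitions, and applies a standard off-policy one-step Q-learning update.

```python
# Illustrative sketch (not the thesis implementation) of per-step MQL-style adaptation:
# at each time step, source transitions whose states resemble the current target-graph
# state are recycled for an off-policy Q-learning update.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNet(nn.Module):
    """Toy Q-network over fixed-size state embeddings (assumed architecture)."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, num_actions)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def similarity(source_states: torch.Tensor, target_state: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between each stored source state and the current target state."""
    return F.cosine_similarity(source_states, target_state.unsqueeze(0), dim=-1)


def adapt_step(qnet, optimizer, buffer, target_state, tau=0.8, gamma=0.99):
    """One adaptation step: keep high-similarity source transitions, do an off-policy update.

    `buffer` is assumed to hold tensors: state (N, d), action (N,) long, reward (N,),
    next_state (N, d); `tau` is the similarity threshold.
    """
    keep = similarity(buffer["state"], target_state) > tau
    if keep.sum() == 0:
        return  # nothing similar enough to recycle at this step
    s = buffer["state"][keep]
    a = buffer["action"][keep]
    r = buffer["reward"][keep]
    s_next = buffer["next_state"][keep]
    with torch.no_grad():
        target_q = r + gamma * qnet(s_next).max(dim=-1).values  # one-step TD target
    pred_q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)       # Q(s, a) for taken actions
    loss = F.mse_loss(pred_q, target_q)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Under these assumptions, a caller would build the Q-network and optimizer once (e.g. `qnet = QNet(state_dim, num_actions)` with `torch.optim.Adam(qnet.parameters())`) and invoke `adapt_step` once per labeling step on the target graph, alongside whatever on-target updates the framework performs.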
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)