A framework for programming and optimizing temporal graph neural networks
Wang, Yufeng
Permalink
https://hdl.handle.net/2142/120311
Description
- Title
- A framework for programming and optimizing temporal graph neural networks
- Author(s)
- Wang, Yufeng
- Issue Date
- 2023-04-24
- Advisor
- Mendis, Charith
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- temporal graph neural networks
- temporal graphs
- dynamic graphs
- redundancy-aware optimizations
- memoization
- deduplication
- precomputation
- framework
- Abstract
- In recent years, Temporal Graph Neural Networks (TGNNs) have gained popularity and adoption across application domains for learning predictive models on temporal graphs. Unlike earlier GNN models that assume a static graph, TGNNs can learn on evolving graph structure by employing time-encoding or memory-based techniques to capture the temporal patterns of the graph, attaining superior predictive performance compared to their static counterparts. Machine learning researchers are constantly experimenting with new designs for these TGNN models. Prior work exists on frameworks and optimizations for TGNNs, but it is restricted to discrete-time graphs that lack the richer temporal information of continuous-time graphs, and it does not exploit unique temporal characteristics. This leads to suboptimal model implementations, longer training times and higher inference latency, as well as difficult-to-use programming interfaces and non-composable abstractions. In this thesis, we address these issues by introducing our TGLite framework design. We outline a lightweight framework that provides easy-to-use abstractions and composable operators for programming and optimizing TGNNs for continuous-time temporal graphs. First, we present a set of redundancy-aware optimizations tailored to these TGNN models, which exploit unique characteristics in their temporal embedding and time-encoding computations (an illustrative sketch of the deduplication idea follows this record). Next, we discuss how these optimizations, as well as other common TGNN computation patterns, are implemented as standard operators in the TGLite framework, along with details of the novel abstractions that they operate on. Our evaluation results indicate that our optimization techniques are effective for the TGAT model in accelerating inference performance, with geomean speedups of 4.88x on CPU and 2.93x on GPU. When generalizing these optimizations to other TGNN models as part of our TGLite framework, we show that TGLite can outperform strong baseline frameworks like TGL, with geomean speedups of 1.86x for per-epoch training time and 1.84x for inference. As we continue developing the TGLite prototype framework, we intend to improve current functionality and incorporate further optimizations while retaining the framework's programmability and clean abstractions.
- Graduation Semester
- 2023-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2023 Yufeng Wang
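To make the deduplication optimization mentioned in the abstract concrete: in a sampled temporal neighborhood, many edges share the same time delta, so the time encoding only needs to be computed once per unique delta. Below is a minimal, hypothetical PyTorch sketch of that idea, assuming a TGAT-style sinusoidal time encoder; the names TimeEncoder and dedup_time_encode are illustrative and are not TGLite's actual API.

```python
import torch
import torch.nn as nn

class TimeEncoder(nn.Module):
    """TGAT-style sinusoidal time encoder: maps a time delta dt to cos(dt * w + b)."""
    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim))   # learnable frequencies
        self.b = nn.Parameter(torch.zeros(dim))   # learnable phases

    def forward(self, dt: torch.Tensor) -> torch.Tensor:
        # dt: (N,) time deltas -> (N, dim) encodings
        return torch.cos(dt.unsqueeze(-1) * self.w + self.b)

def dedup_time_encode(encoder: TimeEncoder, dt: torch.Tensor) -> torch.Tensor:
    """Encode only the unique time deltas, then scatter results back to batch order."""
    uniq, inverse = torch.unique(dt, return_inverse=True)
    return encoder(uniq)[inverse]

# Usage: six sampled neighbors share only three distinct deltas, so the
# encoder runs on three inputs instead of six.
enc = TimeEncoder(dim=8)
deltas = torch.tensor([0.0, 1.5, 1.5, 3.0, 0.0, 1.5])
out = dedup_time_encode(enc, deltas)  # shape: (6, 8)
```

The savings grow with batch size, since sampled temporal neighborhoods in real event streams tend to repeat a small set of time deltas.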
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)