Processing graphs and sparse matrices efficiently
Yesil, Serif
Permalink
https://hdl.handle.net/2142/115730
Description
- Title
- Processing graphs and sparse matrices efficiently
- Author(s)
- Yesil, Serif
- Issue Date
- 2022-04-20
- Director of Research (if dissertation) or Advisor (if thesis)
- Torrellas, Josep
- Doctoral Committee Chair(s)
- Torrellas, Josep
- Committee Member(s)
- Padua, David
- Hwu, Wen-mei
- Mendis, Charith
- Morrison, Adam
- Moreira, Jose E.
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Graph Analytics
- Priority Scheduling
- Sparse Matrix Primitives
- Sparse Matrix-Vector Multiplications
- Sparse Matrix-Matrix Multiplications
- Abstract
- Graph analytics applications are hard to optimize: their performance depends heavily on the input graph, which can vary significantly. One effective approach to graph analytics is to represent the input graph as a sparse matrix and use the operators of sparse linear algebra. Numerous methods have been developed to accelerate the resulting sparse matrix computations. Unfortunately, no single method outperforms all others for all types of matrices; to obtain high performance on a wide range of matrices, we need to determine the best method for each individual matrix. In this work, we target automated optimization of graph applications. First, we consider data-driven graph algorithms with priority scheduling. We perform a detailed performance analysis of various priority schedulers and observe that performance is application- and input-graph-dependent. Based on these insights, we develop the Priority Merging on Demand (PMOD) priority scheduler, a dynamic mechanism that adapts to graph and application characteristics at runtime. Next, we focus on topology-driven graph analytics, where all vertices of a graph are processed in every iteration. These applications can be represented as generalized Sparse Matrix-Vector multiplication (SpMV) operations. However, SpMV-style computations are challenging on emerging power-law graphs because of their skewed degree distributions and highly irregular memory behavior. To address these challenges, we propose Locality-Aware Vectorization (LAV), which improves the locality and vectorization efficiency of power-law graph analytics. Furthermore, we identify that the optimizations applied in LAV form an optimization search space encompassing many different SpMV methods, and that different combinations of these optimizations yield the best performance for matrices and graphs from different application domains.
For this reason, we develop the Matrix-Vector Performance Predictor (MVPP), a machine learning-based performance model for SpMV that predicts the best strategy for each individual matrix. Finally, we note that many applications can benefit from the increased compute intensity offered by Sparse Matrix-Matrix Multiplication (SpMM). In our final work, we develop Dense Dynamic Blocks (DDB), a hybrid approach that accelerates SpMM by utilizing both the vector units and the emerging matrix units in recent processors. However, as with SpMV, not all matrices benefit from DDB, and the best SpMM method should be selected on a per-matrix basis. Hence, we develop the SpMM Optimizer (SpMM-OPT), a machine learning approach that selects the best SpMM strategy on functional-unit-rich processors.
- Graduation Semester
- 2022-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2022 Serif Yesil
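The abstract above casts topology-driven graph analytics as generalized SpMV over a sparse adjacency matrix. As a minimal illustration of that framing (not code from the dissertation; the CSR layout and names here are generic), a plain CSR SpMV kernel looks like this, where a graph algorithm would replace the multiply-add semiring with its own operators:

```python
def spmv_csr(row_ptr, col_idx, vals, x):
    """Plain CSR sparse matrix-vector product y = A @ x.

    row_ptr[i]..row_ptr[i+1] indexes the nonzeros of row i;
    col_idx holds their column positions, vals their values.
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            # In generalized SpMV for graph analytics, this
            # multiply-add is replaced by application-specific
            # combine/reduce operators (a semiring).
            acc += vals[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Tiny 3-vertex path graph as a 3x3 adjacency matrix:
# A = [[0, 1, 0],
#      [1, 0, 1],
#      [0, 1, 0]]
row_ptr = [0, 1, 3, 4]
col_idx = [1, 0, 2, 1]
vals = [1.0, 1.0, 1.0, 1.0]
print(spmv_csr(row_ptr, col_idx, vals, [1.0, 2.0, 3.0]))  # [2.0, 4.0, 2.0]
```

On power-law graphs, the indirect access `x[col_idx[k]]` is the irregular, locality-poor step that techniques like LAV target.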
Owning Collections
Graduate Dissertations and Theses at Illinois PRIMARY