Speeding-up graph processing on shared-memory platforms by optimizing scheduling and compute
Heidarshenas, Azin
Description
- Title
- Speeding-up graph processing on shared-memory platforms by optimizing scheduling and compute
- Author(s)
- Heidarshenas, Azin
- Issue Date
- 2021-12-02
- Director of Research (if dissertation) or Advisor (if thesis)
- Torrellas, Josep
- Doctoral Committee Chair(s)
- Torrellas, Josep
- Committee Member(s)
- Chen, Deming
- Hwu, Wen-mei
- Kumar, Rakesh
- Misailovic, Sasa
- Morrison, Adam
- Department of Study
- Electrical & Computer Engineering
- Discipline
- Electrical & Computer Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Shared-memory platforms
- graph processing
- scheduling
- approximate computation
- dynamic graphs
- Abstract
- Graph processing workloads are widely used in many domains, such as computational biology, social network analysis, and financial analysis. As DRAM technology scales to higher densities, shared-memory platforms become increasingly important for handling large graphs. We study two main categories of graph algorithms from an implementation perspective: topology-driven algorithms process all vertices of the graph in each iteration, while data-driven algorithms process only those vertices that make a substantial contribution to the output. Furthermore, the execution time of a graph algorithm can be broken down into three components: pre-processing, compute, and scheduling. For data-driven algorithms, the work of each thread is driven by dependencies between vertex values that are known only at run time; hence, scheduling takes a significant portion of the execution. For topology-driven algorithms, in contrast, scheduling time is negligible, since the work of each thread can be determined at compile time. In this dissertation, we present three techniques that address the performance bottlenecks of both data-driven and topology-driven algorithms. First, we present Snug, a chip-level architecture that mitigates the trade-off between synchronization and wasted work in data-driven algorithms. Second, we present V-Combiner, a software-only technique that mitigates the trade-off between performance and accuracy in topology-driven algorithms using novel vertex-merging and recovery mechanisms. Finally, we present KeepCompressed, a set of algorithms that speeds up compute for topology-driven algorithms on dynamic graphs using vertex clustering. (An illustrative sketch contrasting the two algorithm classes appears after this record.)
- Graduation Semester
- 2021-12
- Type of Resource
- Thesis
- Permalink
- http://hdl.handle.net/2142/114094
- Copyright and License Information
- Copyright 2021 Azin Heidarshenas
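
The following is a minimal, illustrative sketch, not taken from the dissertation, of the two algorithm classes the abstract describes: a topology-driven pass that visits every vertex of a CSR graph in each iteration, and a data-driven pass whose work comes from a runtime worklist. The CSR layout, the value array, and the relaxation-style update are assumptions chosen for illustration; the dissertation's own systems (Snug, V-Combiner, KeepCompressed) are not modeled here.

```cpp
#include <algorithm>
#include <cstdint>
#include <queue>
#include <vector>

// Toy graph in compressed sparse row (CSR) form.
struct Graph {
    std::vector<uint32_t> row_ptr;  // offsets into col_idx, size |V|+1
    std::vector<uint32_t> col_idx;  // concatenated neighbor lists
    uint32_t num_vertices() const {
        return static_cast<uint32_t>(row_ptr.size()) - 1;
    }
};

// Topology-driven: every vertex is processed each iteration, so the work of
// each thread is known up front and can be partitioned statically.
void topology_driven_pass(const Graph& g, std::vector<float>& value) {
    for (uint32_t v = 0; v < g.num_vertices(); ++v) {
        for (uint32_t e = g.row_ptr[v]; e < g.row_ptr[v + 1]; ++e) {
            uint32_t u = g.col_idx[e];
            // Example pull-style update (illustrative only).
            value[v] = std::min(value[v], value[u] + 1.0f);
        }
    }
}

// Data-driven: only "active" vertices are processed, and which vertices
// become active depends on values computed at run time, so managing the
// worklist becomes a significant part of the execution.
void data_driven_pass(const Graph& g, std::vector<float>& value,
                      std::queue<uint32_t>& worklist) {
    while (!worklist.empty()) {
        uint32_t v = worklist.front();
        worklist.pop();
        for (uint32_t e = g.row_ptr[v]; e < g.row_ptr[v + 1]; ++e) {
            uint32_t u = g.col_idx[e];
            if (value[v] + 1.0f < value[u]) {  // relaxation made progress
                value[u] = value[v] + 1.0f;
                worklist.push(u);              // u may now enable more work
            }
        }
    }
}
```

Because the data-driven worklist grows and shrinks at run time, distributing its entries across threads is the scheduling cost the abstract attributes to data-driven algorithms, whereas the topology-driven loop has a fixed iteration space that can be divided among threads ahead of time.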
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)