Massively parallel message passing on a GPU for graphical model inference
Martini, Amr Mamoun
Permalink
https://hdl.handle.net/2142/108728
Description
Title
Massively parallel message passing on a GPU for graphical model inference
Author(s)
Martini, Amr Mamoun
Issue Date
2020-07-22
Director of Research (if dissertation) or Advisor (if thesis)
Schwing, Alex G.
Department of Study
Electrical & Computer Engineering
Discipline
Electrical & Computer Engineering
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
graphical models
GPU
statistical inference
computational inference
coordinate descent
Abstract
Graphical model inference is fundamental to many problems across disciplines. However, its combinatorial nature makes it computationally challenging. For more effective inference, message passing algorithms that expose significant parallelism have been implemented to exploit graphics processing units (GPUs), albeit often tackling specific graphical model structures such as directed acyclic graphs (DAGs), grids, uniform state spaces, and pairwise models. All of these implementations emphasize the importance of load balancing irregular graphs in order to fully utilize GPU parallelism. However, they do not formalize the problem and instead give ad hoc solutions. In contrast, we formalize load balancing of message passing for general, irregular graphs as a minimax problem and develop an algorithm to solve it efficiently. We show that our implementation permits scaling of message passing to meet the demands of current problems of interest in machine learning and computer vision, achieving significant speedups over the state of the art.
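The abstract frames GPU load balancing as a minimax problem: partition the message computations across thread blocks so that the maximum per-block workload is minimized. The thesis's actual algorithm is not given here; the sketch below illustrates the objective with a generic longest-processing-time (LPT) greedy heuristic, where the function name, cost values, and block count are illustrative assumptions, not the author's method.

```python
import heapq

def balance_messages(message_costs, num_blocks):
    """Assign messages to GPU thread blocks, greedily minimizing the
    maximum per-block workload (the minimax objective).

    LPT heuristic (illustrative, not the thesis's algorithm): sort
    messages by cost descending, then repeatedly give the next message
    to the currently least-loaded block.
    """
    # Min-heap of (current_load, block_id), so heappop yields the
    # least-loaded block.
    heap = [(0, b) for b in range(num_blocks)]
    assignment = {b: [] for b in range(num_blocks)}
    for msg, cost in sorted(enumerate(message_costs), key=lambda x: -x[1]):
        load, b = heapq.heappop(heap)
        assignment[b].append(msg)
        heapq.heappush(heap, (load + cost, b))
    # The makespan is the heaviest block's load after all assignments.
    max_load = max(load for load, _ in heap)
    return assignment, max_load

# Irregular graph: message costs vary, e.g. with variable state-space
# sizes (hypothetical numbers).
costs = [9, 7, 7, 5, 4, 3, 2, 1]
assignment, makespan = balance_messages(costs, num_blocks=3)
```

For this input the greedy schedule reaches a makespan of 13, which matches the lower bound ⌈38/3⌉; in general, LPT only approximates the minimax optimum, which is why a formalized, efficiently solvable formulation, as pursued in the thesis, matters for irregular graphs.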