Probabilistic graphical models (PGMs) are a powerful framework for modeling interactions between random variables. The two major inference tasks on PGMs are marginal probability inference and maximum-a-posteriori (MAP) inference. Exact inference on general PGMs is intractable, so approximate algorithms such as belief propagation (BP) are used in practice.
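For concreteness, here is a standard form of BP's sum-product update on a pairwise Markov random field with unary potentials $\phi_i$ and pairwise potentials $\psi_{ij}$; the notation is ours and not taken from this work:

\[
m_{i \to j}(x_j) \;\propto\; \sum_{x_i} \phi_i(x_i)\, \psi_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus \{j\}} m_{k \to i}(x_i),
\qquad
b_i(x_i) \;\propto\; \phi_i(x_i) \prod_{k \in N(i)} m_{k \to i}(x_i).
\]

Iterating the message update to convergence and normalizing the beliefs $b_i$ yields the exact marginals on trees, but only approximations on loopy graphs.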
Recently, graph neural networks (GNNs) have been shown to outperform BP on small loopy graphs. A GNN computes a more general function at each node using neural networks, and can learn the exact distribution on small graphs, both loop-free and loopy. Because BP is already exact on loop-free graphs and on graphs with exactly one loop, GNNs perform worse than BP on these graphs; on graphs with more loops, where BP's accuracy degrades, GNNs outperform it.
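The following is a minimal sketch of one synchronous message-passing step in a generic GNN of this kind, assuming the common "aggregate neighbor messages, then update the node state" form; the weight matrices, tanh nonlinearities, and sum aggregator are illustrative stand-ins for learned functions, not the architecture studied in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden dimension (illustrative)

# Random weights stand in for learned parameters.
W_msg = rng.normal(size=(D, 2 * D))  # message function over (h_i, h_j)
W_upd = rng.normal(size=(D, 2 * D))  # update function over (h_i, aggregate)

def gnn_step(h, adj):
    """One message-passing step: each node aggregates messages from its
    neighbors, then updates its hidden state."""
    n = h.shape[0]
    h_new = np.empty_like(h)
    for i in range(n):
        msgs = [np.tanh(W_msg @ np.concatenate([h[i], h[j]]))
                for j in range(n) if adj[i, j]]
        agg = np.sum(msgs, axis=0) if msgs else np.zeros(D)
        h_new[i] = np.tanh(W_upd @ np.concatenate([h[i], agg]))
    return h_new

# Toy usage: a 4-node path graph (loop-free), random initial states.
adj = np.zeros((4, 4), dtype=bool)
for a, b in [(0, 1), (1, 2), (2, 3)]:
    adj[a, b] = adj[b, a] = True
h = rng.normal(size=(4, D))
for _ in range(3):  # a few propagation rounds
    h = gnn_step(h, adj)
print(h.shape)  # (4, 8)
```

Note the structural parallel to BP: both pass messages along edges and combine them at nodes, but the GNN replaces BP's fixed sum-product combination rule with learned functions.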
We propose a simplified GNN architecture, GNN-Mimic-BP, which outperforms the original GNN by orders of magnitude on loop-free graphs. Indeed, the simplification enables the architecture to mimic BP exactly on loop-free graphs. We then combine the simplified architecture with enhanced information about short loops in the graph. The resulting architecture outperforms the original GNN both on classic graph families ranging from loop-free to complete and on random graphs over a wide range of edge densities.