Authored By: Minh N. Vu, My T. Thai

PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks

Jan 27, 2024

1. Abstract

In Graph Neural Networks (GNNs), the graph structure plays a pivotal role in learning node representations, yet interpreting the predictions of a trained GNN remains a daunting task. We therefore propose PGM-Explainer, a Probabilistic Graphical Model (PGM) explainer for GNNs. Given a prediction to be explained, this model-agnostic explainer identifies critical graph components and generates a PGM that approximates the prediction. Unlike conventional GNN explainers, which operate on linear functions of the explained features, PGM-Explainer can illustrate the dependencies among explained features through conditional probabilities. Our theoretical analysis shows that the PGM generated by PGM-Explainer captures all the statistical information about the target prediction, and experiments on synthetic and real-world datasets show that it outperforms existing explainers on various tasks.

2. Introduction

GNNs have proven their worth in applications across diverse fields where datasets take the form of graphs, including social networks, citation networks, knowledge graphs, and biological networks. Any increase in the transparency of how GNNs make decisions, in turn, increases trust in the model. Moreover, knowledge of the model's behaviour helps us identify scenarios in which the system may fail, which is particularly crucial for safety in real-world tasks where not all possible scenarios are testable. It is also necessary, for privacy and fairness reasons, to understand whether a model is biased in its decision-making. GNNExplainer, introduced recently, explains GNNs using a mutual-information approach. However, little is known about the quality of GNNExplainer's explanations, and it is unclear whether mutual information is apt for this task. Most importantly, GNNExplainer requires the explained model to compute predictions on a fractional adjacency matrix, a capability unavailable in most GNN libraries such as PyTorch and DGL.

3. Introduction to PGM-Explainer

We propose a model-agnostic PGM explainer for GNNs, named PGM-Explainer. This tool generates the explanation of a GNN's prediction as a simplified, interpretable Bayesian network that approximates the said prediction. A Bayesian network does not rely on the linear-independence assumption of explained features, which allows PGM-Explainer to illustrate the interdependence among explained features and provide deeper explanations of GNN predictions than existing additive feature-attribution methods. Moreover, our theoretical analysis shows that, provided a perfect map exists for the distribution generated by perturbing the GNN's input, the Bayesian network constructed by PGM-Explainer always includes the Markov blanket of the target prediction.
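To see why conditional probabilities can capture dependencies that additive attributions miss, consider the following toy example (hypothetical, not from the paper): a target whose prediction flips only when two nodes are perturbed together. A per-feature additive score sees each node as individually weak, while the conditional probability table reveals the joint effect.

```python
import random

# Hypothetical interaction: the prediction flips only when nodes u and v
# are BOTH perturbed -- a dependency a single additive score per feature
# cannot express, but which conditional probabilities reveal.
def prediction_changed(s_u, s_v):
    return s_u and s_v

rng = random.Random(1)
# Random perturbation samples: (u perturbed?, v perturbed?)
samples = [(rng.random() < 0.5, rng.random() < 0.5) for _ in range(2000)]

def p_change(cond):
    """Estimate P(prediction changed | cond) from the samples."""
    sub = [s for s in samples if cond(s)]
    return sum(prediction_changed(*s) for s in sub) / len(sub)

p_given_u = p_change(lambda s: s[0])             # ~0.5: u alone looks weak
p_given_uv = p_change(lambda s: s[0] and s[1])   # 1.0: the joint effect is total
```

Here the conditional P(change | u and v perturbed) is 1.0 while P(change | u perturbed) hovers around 0.5, exposing an interdependence that a linear function of the two features would average away.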

4. PGM-Explainer: Probabilistic Graphical Model Explanations for GNNs

PGM-Explainer comprises three primary steps: data generation, variable selection, and structure learning. The data-generation step randomly perturbs the features of nodes in the input graph and feeds each perturbed graph through the GNN to observe how the target prediction responds. In the node-classification task, the realization of node variable v, the pair (s_v, I(prediction | G(s))_v) recording whether v's features were perturbed and the resulting influence on the prediction, is stored in the dataset D_t. The variable-selection step filters this dataset down to the variables relevant to the target, and the structure-learning step takes the filtered data and generates a PGM explanation. For detailed information on variable selection, the explanation model, PGMs as an interpretable domain, and the three major components of PGM-Explainer, we encourage you to dive deeper into our research paper.
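The first two steps above can be sketched as follows. This is a minimal illustration assuming a black-box `predict` function and binary perturbation indicators; the names `generate_data` and `select_variables` are ours, and the dependence score is a simple stand-in for the statistical tests the actual method would use.

```python
import random

def generate_data(predict, nodes, n_samples=500, p_perturb=0.5, seed=0):
    """Data generation (sketch): randomly perturb node features and record,
    per sample, which nodes were perturbed (s_v) and whether the target
    prediction changed (the influence indicator)."""
    rng = random.Random(seed)
    base = predict(frozenset())  # unperturbed prediction
    records = []
    for _ in range(n_samples):
        perturbed = frozenset(v for v in nodes if rng.random() < p_perturb)
        changed = int(predict(perturbed) != base)
        records.append(({v: int(v in perturbed) for v in nodes}, changed))
    return records

def select_variables(records, nodes, top_k=3):
    """Variable selection (sketch): keep the nodes whose perturbation
    indicator deviates most from independence with the prediction change."""
    n = len(records)
    def dependence(v):
        agree = sum(1 for s, c in records if s[v] == c)
        return abs(agree / n - 0.5)
    return sorted(nodes, key=dependence, reverse=True)[:top_k]

# Toy black-box: the target's prediction depends only on node "a".
predict = lambda perturbed: "a" not in perturbed
nodes = ["a", "b", "c", "d", "e"]
records = generate_data(predict, nodes)
important = select_variables(records, nodes, top_k=1)  # ["a"]
```

The final structure-learning step, omitted here, would fit a Bayesian network over the selected variables and the target, for instance with an off-the-shelf structure learner such as pgmpy's hill-climb search.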

5. Conclusion

In conclusion, PGM-Explainer is a pioneering step toward explaining GNNs and their predictions. The tool can help data scientists and network modellers understand, predict, and decode complex GNNs, thereby broadening the accessibility and acceptance of GNNs in diverse fields.