Graph Neural Networks (GNNs) are popular AI models for learning from real-world data represented as graphs. However, they lack transparency. This thesis argues that GNN explanation methods should offer naturalness, sensitivity, reference to graph topology, and fidelity. Three explanation models combining symbolic and sub-symbolic elements are introduced, providing subgraph, non-ontological, and ontological explanations. These models were evaluated on molecular chemistry and cybersecurity datasets. The results show that combining sub-symbolic and symbolic methods improves GNN explainability, particularly with respect to fidelity, sensitivity, and reference to graph topology. Furthermore, it is shown that choosing the best explanation method for a GNN depends on the use case, the available domain knowledge, and whether local or global explanations are needed.