Graph Neural Networks (GNNs) have become the standard tool for learning on structured data such as molecules, citation networks, and social networks. Their training typically relies on iterative backpropagation through several message-passing layers, which can be computationally expensive. By contrast, Sampling Where It Matters (SWIM) is a forward-only random-feature method that avoids gradient updates entirely: it samples pairs of training points, constructs hidden weights and biases from them, and fits only a final linear layer in closed form. Prior work has shown that SWIM can achieve competitive accuracy with much lower training times in fully connected networks. Motivated by these results, we investigate whether these advantages carry over to GNNs. In this thesis, we extend SWIM to GNN pipelines in a framework called swimgnn, which integrates SWIM layers with message-passing and pooling operations. We evaluate SWIM against Adam-trained baselines on two core GNN tasks: graph classification and node classification. Experiments were conducted on seven benchmark datasets under two evaluation settings: accuracy-controlled, where training times are compared at similar validation accuracy, and time-controlled, where accuracy is compared under equal time budgets. Our results show that a SWIM-trained pipeline can reach competitive accuracy on graph classification benchmarks while being orders of magnitude faster than Adam. On node classification, SWIM's results were more mixed, underperforming Adam at convergence. These findings suggest that SWIM is useful when runtime efficiency is critical, while iterative training remains preferable when peak accuracy is required. Overall, our work demonstrates that sampling-based training can be a practical alternative to backpropagation for many GNN tasks and motivates further exploration of random-feature methods in graph learning.