Neural networks have become dominant in machine learning due to their ability to automatically learn complex patterns. They are typically trained with iterative gradient-based optimizers, Adam being one of the most widely used. Sampled Networks, introduced by Bolager et al., offer an alternative in which network parameters are constructed directly from sampled pairs of data points, which eliminates iterative optimization and improves efficiency and interpretability.
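For context, a minimal sketch of the construction in Bolager et al., as we read it: each hidden neuron's weight and bias are derived from a sampled pair of training points \(x^{(1)}, x^{(2)}\),

\[
w = s_1 \, \frac{x^{(2)} - x^{(1)}}{\lVert x^{(2)} - x^{(1)} \rVert^{2}}, \qquad b = -\langle w, x^{(1)} \rangle - s_2,
\]

where \(s_1\) and \(s_2\) are activation-dependent scaling constants (their exact values are fixed in the original paper; treating them as free parameters here is an assumption). By construction, the pre-activation equals \(-s_2\) at \(x^{(1)}\) and \(s_1 - s_2\) at \(x^{(2)}\), so each neuron is anchored to an interpretable pair of data points.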
The primary contribution of this thesis is an algorithm that converts traditionally trained neural networks into equivalent sampled networks, thereby providing the interpretability and transparency these networks offer. The focus is on converting a two-layer neural network with the ReLU activation function into a sampled network. The converted sampled network should closely match the trained network in its parameters, namely weights and biases, as well as in its output. We propose several approaches for converting both the hidden-layer and the output-layer parameters of the trained network. Numerical experiments compare the proposed approaches with respect to the agreement of parameters and outputs between the original trained network and the converted sampled network, as well as the runtime of the conversion algorithm.
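To make one of these subproblems concrete: in a sampled network, once the hidden-layer parameters are fixed, fitting the output layer reduces to a linear least-squares problem. A minimal sketch, assuming hidden activations \(\Phi(X)\) on the training inputs \(X\) and targets \(Y\):

\[
W_{\mathrm{out}} \in \arg\min_{W} \, \lVert \Phi(X)\, W - Y \rVert_F^{2},
\]

which admits the closed-form solution \(W_{\mathrm{out}} = \Phi(X)^{+} Y\) via the Moore–Penrose pseudoinverse.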