This thesis will describe our approach to implementing transformations between neural networks, as presented in "Transformations between deep neural networks" by Tom Bertalan, Felix Dietrich, and Ioannis G. Kevrekidis, applied to established, pre-trained neural networks such as Wav2Vec2, XLNet, and other similar models. By creating transformations between these networks, we aim to establish equivalence classes among widely used, pre-trained models. For our implementation of the transformations, we will use the approach established in the aforementioned paper, namely diffusion maps with a Mahalanobis-like metric. We will also use Whitney's theorem to estimate the number of measurements required from each neural network to reconstruct all features of the other network. For this purpose, we aim to use different models trained to tackle tasks such as fault detection on sequence data, speech recognition, and text interpretation. The models used for the representation experiments depended on the availability of pre-trained neural networks, but we will also present procedures that aim to implement the transformation between neural networks working with popular data types, such as vibration, text, and sound data.
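To make the core construction concrete, the following is a minimal, illustrative sketch of diffusion maps with a Mahalanobis-like metric. It assumes local covariances are estimated from nearest-neighbour clouds of recorded layer activations; the function names (local_covariances, mahalanobis_like_distances, diffusion_maps) are hypothetical, and the thesis implementation may estimate the local covariances differently.

```python
# Illustrative sketch only; not the implementation used in this thesis.
import numpy as np

def local_covariances(X, k=10):
    """Estimate a local covariance matrix around each sample from its k nearest neighbours."""
    n, d = X.shape
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    covs = np.empty((n, d, d))
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:k + 1]                 # k nearest neighbours, excluding the point itself
        covs[i] = np.cov(X[nbrs].T) + 1e-6 * np.eye(d)       # small regularisation for invertibility
    return covs

def mahalanobis_like_distances(X, covs):
    """Symmetric Mahalanobis-like squared distances: 0.5 * (x_i - x_j)^T (C_i^{-1} + C_j^{-1}) (x_i - x_j)."""
    n = X.shape[0]
    inv_covs = np.linalg.inv(covs)
    D2 = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            diff = X[i] - X[j]
            D2[i, j] = D2[j, i] = 0.5 * diff @ (inv_covs[i] + inv_covs[j]) @ diff
    return D2

def diffusion_maps(X, n_components=3, k=10, eps=None):
    """Leading diffusion-map coordinates computed from the Mahalanobis-like kernel."""
    D2 = mahalanobis_like_distances(X, local_covariances(X, k))
    if eps is None:
        eps = np.median(D2[D2 > 0])                          # simple bandwidth heuristic
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                     # row-normalised Markov matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)[1:n_components + 1]    # skip the trivial constant eigenvector
    return eigvecs.real[:, order] * eigvals.real[order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    activations = rng.normal(size=(200, 5))                  # placeholder for recorded layer activations
    coords = diffusion_maps(activations, n_components=3)
    print(coords.shape)                                      # (200, 3)
```

In this sketch the diffusion-map coordinates would be computed separately for the activations of each pre-trained network on a shared set of inputs, and the transformation between the networks would then be sought between the resulting low-dimensional embeddings, with Whitney's theorem bounding how many generic measurements of one network suffice to reconstruct the features of the other.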