In this manuscript, the topic of multi-corpus Speech Emotion Recognition (SER) is approached from a deep transfer learning perspective. A large corpus of emotional speech data, EMOSET, is assembled from a number of existing SER corpora. In total, EMOSET contains 84,181 audio recordings from 26 SER corpora with a total duration of over 65 hours. The corpus is then utilised to create a novel framework for multi-corpus SER and general audio recognition, namely EMONET. A combination of a deep ResNet architecture and residual adapters is transferred from the field of multi-domain visual recognition to multi-corpus SER on EMOSET. The introduced residual adapter approach enables parameter-efficient training of a multi-domain SER model on all 26 corpora. A shared model with only 3.5 times the number of parameters of a model trained on a single database leads to increased performance for 21 of the 26 corpora in EMOSET. Using repeated training runs and Almost Stochastic Order testing at a significance level of α = 0.05, these improvements are statistically significant for 15 datasets, while only three corpora show significant performance decreases across the residual adapter transfer experiments. Finally, we make our EMONET framework publicly available for users and developers at https://github.com/EIHW/EmoNet.
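For illustration, the sketch below shows one way a parallel residual adapter can be attached to a shared convolution inside a ResNet-style block, in the spirit of the multi-domain visual recognition approach transferred here. It is a minimal sketch only: the class, parameter names, and layer arrangement are assumptions for exposition and do not reproduce the released EMONET implementation.

```python
import torch
import torch.nn as nn


class AdapterBlock(nn.Module):
    """Sketch of a parallel residual adapter block (hypothetical names).

    A single 3x3 convolution is shared across all corpora, while each
    corpus/domain owns a lightweight 1x1 adapter and its own BatchNorm.
    """

    def __init__(self, channels: int, num_domains: int):
        super().__init__()
        # Shared backbone convolution, trained jointly across corpora
        self.shared_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                     padding=1, bias=False)
        # One small 1x1 adapter and normalisation layer per corpus
        self.adapters = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            for _ in range(num_domains)
        )
        self.norms = nn.ModuleList(
            nn.BatchNorm2d(channels) for _ in range(num_domains)
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Shared filter output plus a domain-specific parallel correction
        out = self.shared_conv(x) + self.adapters[domain](x)
        out = self.norms[domain](out)
        # Residual connection around the whole block
        return self.relu(out + x)
```

In such an arrangement, only the per-corpus 1x1 adapters and normalisation layers add parameters when a further corpus is included, which is the property that keeps a shared multi-domain model far smaller than 26 independently trained networks.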