Affective Computing forms the basis of natural future human-computer communication. Within this context, this thesis presents a variety of innovative approaches towards robust Automatic Emotion Recognition from spoken and manual interaction. On the signal level, evolutionary generation and selection of novel features are introduced. To achieve optimal performance, extensive test runs compare dynamic modelling and statistical time-series analysis, as well as diverse classification and ensemble construction techniques. Interpretation of the spoken content of affective utterances boosts overall robustness and also enables emotion recognition from written text. An excursion into Automatic Speech Recognition and string matching addresses the problem of text capture. Finally, a synergistic multimodal fusion of all information sources is realised. Three scenarios – robust speech processing, music information retrieval, and in-car interaction – demonstrate the applicability of the acquired methods in the field and their transferability.