Since its inception, emotion recognition, or affective computing, has increasingly become an active research topic due to its broad applications. The corresponding computational models have gradually migrated from statistically shallow models to neural-network-based deep models, which significantly boost recognition performance and consistently achieve the best results on different benchmarks, and have thus been considered the first choice for emotion recognition. However, the debut of large language models (LLMs), such as ChatGPT and GPT-4, has remarkably astonished the world with their emergent capabilities of zero/few-shot learning, in-context learning (ICL), chain-of-thought reasoning, and others that were never exhibited by previous deep models. In this article, we comprehensively investigate how LLMs perform in emotion recognition across diverse aspects, including ICL, few-shot prompting, accuracy, generalization, and explanation. Moreover, we offer some insights and pose other potential challenges, hoping to ignite broader discussions about enhancing emotion recognition in the new era of advanced and more generalized models.