CNN-BASED MULTIMODAL EMOTION DETECTION: INTEGRATING SPEECH RECOGNITION AND FACIAL EXPRESSION ANALYSIS

Authors

  • Mohammad Amanullah Khan, Dhiravath Sumitha, Swathi Katta

DOI:

https://doi.org/10.48047/

Keywords:

Human-Computer Interaction, Multimodal Emotion Detection, Rule-Based Systems, Convolutional Neural Networks.

Abstract

Accurate identification and interpretation of human emotions are critical in modern affective computing and human-computer interaction. This paper presents a state-of-the-art multimodal emotion detection system that integrates recent methods for facial expression analysis, speech recognition, and video processing. Conventional emotion recognition techniques have shown shortcomings, especially in accurately capturing the complex and ever-changing emotional states of people. Addressing these difficulties, this study aims to create a robust framework that effectively fuses several modalities to improve the accuracy of emotion identification.
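The abstract describes fusing facial and speech modalities to improve recognition accuracy. A common way to do this is score-level (late) fusion, where each modality's classifier produces a probability distribution over emotions and the distributions are combined before the final decision. The sketch below illustrates that idea only; the emotion labels, weights, and probability values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of score-level (late) multimodal fusion.
# EMOTIONS, the fusion weight, and the example probabilities are
# illustrative assumptions, not the authors' actual configuration.

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def fuse_scores(face_probs, speech_probs, face_weight=0.5):
    """Weighted average of two per-modality probability vectors."""
    w = face_weight
    return [w * f + (1 - w) * s for f, s in zip(face_probs, speech_probs)]

def predict(face_probs, speech_probs, face_weight=0.5):
    """Return the emotion label with the highest fused score."""
    fused = fuse_scores(face_probs, speech_probs, face_weight)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: the face model strongly favors "happy"; the speech model
# agrees weakly, so the fused prediction is "happy".
face = [0.05, 0.70, 0.15, 0.10]
speech = [0.10, 0.45, 0.30, 0.15]
print(predict(face, speech))  # -> happy
```

In practice each probability vector would come from a trained per-modality CNN (e.g. a softmax over emotion classes), and the fusion weight can be tuned on validation data.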



Published

2020-08-01

How to Cite

Mohammad Amanullah Khan, Dhiravath Sumitha, Swathi Katta. (2020). CNN-BASED MULTIMODAL EMOTION DETECTION: INTEGRATING SPEECH RECOGNITION AND FACIAL EXPRESSION ANALYSIS. History of Medicine, 6(1), 35-42. https://doi.org/10.48047/