Detailed Information

Cited 5 times in Web of Science · Cited 5 times in Scopus

Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features (Open Access)

Authors
Mamieva, Dilnoza; Abdusalomov, Akmalbek Bobomirzaevich; Kutlimuratov, Alpamis; Muminov, Bahodir; Whangbo, Taeg Keun
Issue Date
Jun-2023
Publisher
MDPI
Keywords
CNN; multimodal emotion recognition; facial feature; speech feature; attention mechanism
Citation
SENSORS, v.23, no.12
Journal Title
SENSORS
Volume
23
Number
12
URI
https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/88595
DOI
10.3390/s23125475
ISSN
1424-8220
Abstract
Methods that detect emotion from several modalities at once have been found to be more accurate and resilient than those that rely on a single modality. This is because sentiments may be conveyed through a wide range of modalities, each offering a distinct and complementary window into the speaker's thoughts and emotions. Fusing and analyzing data from several modalities can therefore yield a more complete picture of a person's emotional state. This research proposes a new attention-based approach to multimodal emotion recognition. The technique integrates facial and speech features extracted by independent encoders, selecting the most informative aspects of each. It increases the system's accuracy by processing speech and facial features of various sizes and focusing attention on the most useful parts of the input. Both low- and high-level facial features are used to build a more comprehensive representation of facial expressions. The modalities are combined by a fusion network into a multimodal feature vector, which is then fed to a classification layer for emotion recognition. The developed system is evaluated on two datasets, IEMOCAP and CMU-MOSEI, and shows superior performance compared to existing models, achieving a weighted accuracy (WA) of 74.6% and an F1 score of 66.1% on the IEMOCAP dataset, and a WA of 80.7% and an F1 score of 73.7% on the CMU-MOSEI dataset.
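The abstract describes the pipeline only at a high level: per-modality encoders, an attention step that weights the modalities, and a fusion into a single multimodal vector. A minimal sketch of modality-level attention fusion is shown below; the vector dimensions, the norm-based scoring function, and all names are illustrative assumptions, not the authors' actual implementation.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fusion(face_feat, speech_feat):
    """Fuse two equal-length feature vectors with scalar attention weights.

    Each modality is scored (here by its L2 norm, a stand-in for a learned
    scorer), the scores become weights via softmax, and the fused vector is
    the weighted sum of the two modality vectors.
    """
    scores = [math.sqrt(sum(x * x for x in face_feat)),
              math.sqrt(sum(x * x for x in speech_feat))]
    w_face, w_speech = softmax(scores)
    fused = [w_face * f + w_speech * s
             for f, s in zip(face_feat, speech_feat)]
    return fused, (w_face, w_speech)

# Hypothetical encoder outputs, already projected to a common dimension.
face = [0.2, -0.5, 0.9, 0.1]
speech = [0.4, 0.3, -0.2, 0.8]
fused, weights = attention_fusion(face, speech)
```

In the paper's full model the fused vector would then pass through a fusion network and a classification layer; here the sketch stops at the weighted combination to show only the attention mechanism itself.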
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of IT Convergence > Department of Computer Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher
College of IT Convergence (Department of Software)
