Detailed Information


Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG

Other Titles
Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG
Authors
Miracle Udurume; Angela C; Wansu Lim (임완수); Gwigon Kim (김귀곤)
Issue Date
Sep-2022
Publisher
Korea Institute of Information and Communication Engineering (한국정보통신학회)
Keywords
Emotion recognition; Multimodality; Multithreading; Real-time implementation
Citation
Journal of Information and Communication Convergence Engineering, v.20, no.3, pp. 174-180
Pages
7
Journal Title
Journal of Information and Communication Convergence Engineering
Volume
20
Number
3
Start Page
174
End Page
180
URI
https://scholarworks.bwise.kr/kumoh/handle/2020.sw.kumoh/26118
DOI
10.56977/jicce.2022.20.3.174
ISSN
2234-8255
2234-8883
Abstract
Emotion recognition is an essential component of complete interaction between humans and machines. The challenges in emotion recognition arise because emotions are expressed in several forms, such as visual, sound, and physiological signals. Recent advances in the field show that combined modalities, such as visual, voice, and electroencephalography signals, lead to better results than single modalities used separately. Previous studies have explored the use of multiple modalities for accurate prediction of emotion; however, the number of studies on real-time implementation is limited because of the difficulty of running multiple modalities of emotion recognition simultaneously. In this study, we proposed an emotion recognition system for real-time implementation. Our model was built with a multithreading block that runs each modality on a separate thread with continuous synchronization. First, we achieved emotion recognition for each modality separately before enabling the multithreaded system. To verify the correctness of the results, we compared the accuracy of unimodal and multimodal emotion recognition in real time. The experimental results demonstrated real-time user emotion recognition with the proposed model. In addition, the effectiveness of multimodality for emotion recognition was observed. Our multimodal model achieved an accuracy of 80.1%, whereas the unimodal models obtained accuracies of 70.9, 54.3, and
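
The multithreaded design described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-modality classifiers are stubbed out with a placeholder function, and the score-averaging fusion rule, queue sizes, and thread structure are assumptions made for the example.

# A minimal sketch of one worker thread per modality (face, voice, EEG)
# with a fusion loop that synchronizes on the most recent prediction
# from each modality. Models and fusion rule are placeholders.
import threading
import queue
import random
import time

MODALITIES = ["face", "voice", "eeg"]

def infer(modality: str) -> dict:
    """Placeholder for a per-modality emotion classifier."""
    time.sleep(random.uniform(0.05, 0.15))  # simulated inference latency
    return {"happy": random.random(), "sad": random.random()}

def worker(modality: str, out: queue.Queue, stop: threading.Event) -> None:
    """Continuously run one modality's recognition on its own thread."""
    while not stop.is_set():
        out.put(infer(modality))  # blocks until the fusion loop consumes

def fuse(predictions: list) -> str:
    """Toy late-fusion rule: average class scores across modalities."""
    classes = predictions[0].keys()
    return max(classes, key=lambda c: sum(p[c] for p in predictions))

stop = threading.Event()
queues = {m: queue.Queue(maxsize=1) for m in MODALITIES}
threads = [threading.Thread(target=worker, args=(m, queues[m], stop), daemon=True)
           for m in MODALITIES]
for t in threads:
    t.start()

for _ in range(5):                                  # five fused predictions
    latest = [queues[m].get() for m in MODALITIES]  # wait for every modality
    print("fused emotion:", fuse(latest))
stop.set()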
Appears in Collections
School of Electronic Engineering > 1. Journal Articles
Department of Business Administration > 1. Journal Articles


