Emotion Recognition Using a Glasses-Type Wearable Device via Multi-Channel Facial Responses
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kwon, Jangho | - |
dc.contributor.author | Ha, Jihyeon | - |
dc.contributor.author | Kim, Da-Hye | - |
dc.contributor.author | Choi, Jun Won | - |
dc.contributor.author | Kim, Laehyun | - |
dc.date.accessioned | 2022-07-06T11:55:43Z | - |
dc.date.available | 2022-07-06T11:55:43Z | - |
dc.date.created | 2021-12-08 | - |
dc.date.issued | 2021-10 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/140674 | - |
dc.description.abstract | We present a glasses-type wearable device that detects emotions from a human face in an unobtrusive manner. The device is designed to gather multi-channel responses from the user's face naturally and continuously while it is worn. The multi-channel facial responses consist of local facial images and biosignals, including electrodermal activity (EDA) and photoplethysmogram (PPG). We conducted experiments to determine the optimal positions of the EDA sensors on the wearable device, because EDA signal quality is highly sensitive to the sensing position. In addition to the physiological data, the device captures the image region representing local facial expressions around the left eye via a built-in camera. In this study, we developed and validated an algorithm to recognize emotions using the multi-channel responses obtained from the device. The results show that the emotion recognition algorithm using only local facial images classifies emotions with an accuracy of 76.09%. Using multi-channel data including EDA and PPG, this accuracy increased by 8.46% compared to using local facial expressions alone. This glasses-type wearable system, which measures multi-channel facial responses in a natural manner, is well suited to monitoring a user's emotions in daily life and has great potential for use in the healthcare industry. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Emotion Recognition Using a Glasses-Type Wearable Device via Multi-Channel Facial Responses | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Choi, Jun Won | - |
dc.identifier.doi | 10.1109/ACCESS.2021.3121543 | - |
dc.identifier.scopusid | 2-s2.0-85118246850 | - |
dc.identifier.wosid | 000714706800001 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.9, pp.146392 - 146403 | - |
dc.relation.isPartOf | IEEE ACCESS | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 9 | - |
dc.citation.startPage | 146392 | - |
dc.citation.endPage | 146403 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | HEART-RATE-VARIABILITY | - |
dc.subject.keywordPlus | SENSOR | - |
dc.subject.keywordAuthor | Wearable computers | - |
dc.subject.keywordAuthor | Emotion recognition | - |
dc.subject.keywordAuthor | Sensors | - |
dc.subject.keywordAuthor | Cameras | - |
dc.subject.keywordAuthor | Biomedical monitoring | - |
dc.subject.keywordAuthor | Glass | - |
dc.subject.keywordAuthor | Motion pictures | - |
dc.subject.keywordAuthor | Wearable device | - |
dc.subject.keywordAuthor | emotion recognition | - |
dc.subject.keywordAuthor | affective computing | - |
dc.subject.keywordAuthor | facial expression | - |
dc.subject.keywordAuthor | biosignal | - |
dc.subject.keywordAuthor | physiological responses | - |
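The abstract describes combining local facial-image features with EDA and PPG biosignal features to improve emotion classification. As a rough illustration of feature-level fusion of this kind, here is a minimal sketch: per-channel feature vectors are normalized and concatenated, then classified with a simple nearest-centroid rule. All function names and the classifier choice are assumptions for illustration only, not the authors' actual algorithm.

```python
# Hypothetical sketch of multi-channel (feature-level) fusion for emotion
# classification. The nearest-centroid classifier and all names here are
# illustrative assumptions, not the method from the paper.
import numpy as np


def fuse_features(face_feat, eda_feat, ppg_feat):
    """Z-score each channel's feature vector, then concatenate them."""
    def z(v):
        v = np.asarray(v, dtype=float)
        s = v.std()
        return (v - v.mean()) / s if s > 0 else v - v.mean()
    return np.concatenate([z(face_feat), z(eda_feat), z(ppg_feat)])


class NearestCentroid:
    """Minimal classifier: predict the label of the closest class mean."""

    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {
            c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
            for c in self.labels_
        }
        return self

    def predict(self, x):
        # Return the class whose centroid is nearest in Euclidean distance.
        return min(self.labels_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))
```

Normalizing each channel before concatenation keeps one modality (e.g. raw PPG amplitudes) from dominating the fused vector; any real system would replace the toy classifier with a learned model.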