Emotion and Body Movement: A Comparative Study of Automatic Emotion Recognition Using Body Motions
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cho, Youngwug | - |
dc.contributor.author | Jung, Myeongul | - |
dc.contributor.author | Kim, Kwanguk | - |
dc.date.accessioned | 2023-02-21T06:04:54Z | - |
dc.date.available | 2023-02-21T06:04:54Z | - |
dc.date.created | 2023-02-08 | - |
dc.date.issued | 2022-10 | - |
dc.identifier.issn | 2771-1102 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/182408 | - |
dc.description.abstract | Emotion recognition through body movement in both real and virtual worlds is an important research topic, along with facial expression and voice recognition. Computational methods for recognizing emotions from body movement have been developed using skeletal data and motion capture systems, and 2D and 3D pose estimation methods have recently been proposed. Although each of these methodologies has advantages and disadvantages, they have not been compared on the same data. In this study, we collected seven types of motion data associated with specified emotional states from 25 participants: happiness, sadness, anger, disgust, fear, surprise, and a neutral emotion. We compared three methodologies (motion capture, 2D pose estimation, and 3D pose estimation) against human evaluations as a baseline. The results show that motion capture achieved the highest performance, and 2D and 3D pose estimation also performed relatively well compared with the human evaluators' results. These findings suggest that the existing methodologies can be used for emotion recognition. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Emotion and Body Movement: A Comparative Study of Automatic Emotion Recognition Using Body Motions | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Kwanguk | - |
dc.identifier.doi | 10.1109/ISMAR-Adjunct57072.2022.00162 | - |
dc.identifier.scopusid | 2-s2.0-85146051394 | - |
dc.identifier.wosid | 000918030200151 | - |
dc.identifier.bibliographicCitation | Proceedings - 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct, ISMAR-Adjunct 2022, pp.768 - 771 | - |
dc.relation.isPartOf | Proceedings - 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct, ISMAR-Adjunct 2022 | - |
dc.citation.title | Proceedings - 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct, ISMAR-Adjunct 2022 | - |
dc.citation.startPage | 768 | - |
dc.citation.endPage | 771 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceedings Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Cybernetics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
dc.subject.keywordPlus | Deep learning | - |
dc.subject.keywordPlus | Human computer interaction | - |
dc.subject.keywordPlus | Speech recognition | - |
dc.subject.keywordPlus | Virtual reality | - |
dc.subject.keywordPlus | Emotion Recognition | - |
dc.subject.keywordPlus | Computing methodologies-Artificial intelligence | - |
dc.subject.keywordPlus | Emotion | - |
dc.subject.keywordPlus | Human-centered computing | - |
dc.subject.keywordPlus | Human-centered computing-Human computer interaction-Interaction paradigms-Virtual reality | - |
dc.subject.keywordPlus | Human-centered computing-Human computer interaction-Interaction paradigms-Mixed/augmented reality | - |
dc.subject.keywordPlus | Human-centered computing-Human computer interaction-Interaction techniques-Gestural input | - |
dc.subject.keywordPlus | Interaction paradigm | - |
dc.subject.keywordPlus | Motion capture | - |
dc.subject.keywordPlus | Pose-estimation | - |
dc.subject.keywordAuthor | Computing methodologies-Artificial intelligence | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | Emotion | - |
dc.subject.keywordAuthor | Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality | - |
dc.subject.keywordAuthor | Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Mixed/augmented reality | - |
dc.subject.keywordAuthor | Human-centered computing-Human computer interaction (HCI)-Interaction techniques-Gestural input | - |
dc.subject.keywordAuthor | motion capture | - |
dc.subject.keywordAuthor | pose estimation | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9974493 | - |
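The abstract describes classifying discrete emotions from body-motion data (motion capture or estimated pose keypoints). The paper's actual models and features are not part of this record; the following is a minimal, hypothetical sketch of the general idea, using two invented hand-crafted features (mean per-frame joint displacement and mean pose spread) and a nearest-centroid rule over toy 2D keypoint sequences. None of these names or feature choices come from the paper.

```python
import math
from collections import defaultdict

def features(frames):
    """frames: list of frames, each a list of (x, y) keypoints.
    Returns (mean per-frame joint displacement, mean pose spread)."""
    disp = 0.0
    for prev, cur in zip(frames, frames[1:]):
        disp += sum(math.dist(p, c) for p, c in zip(prev, cur)) / len(cur)
    disp /= max(len(frames) - 1, 1)
    spread = 0.0
    for f in frames:
        # Mean distance of each keypoint from the pose centroid.
        cx = sum(x for x, _ in f) / len(f)
        cy = sum(y for _, y in f) / len(f)
        spread += sum(math.dist((x, y), (cx, cy)) for x, y in f) / len(f)
    spread /= len(frames)
    return (disp, spread)

def train_centroids(clips):
    """clips: list of (label, frames). Returns label -> mean feature vector."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for label, frames in clips:
        d, s = features(frames)
        acc = sums[label]
        acc[0] += d; acc[1] += s; acc[2] += 1
    return {lab: (a[0] / a[2], a[1] / a[2]) for lab, a in sums.items()}

def predict(centroids, frames):
    """Assign the label whose feature centroid is nearest to this clip."""
    f = features(frames)
    return min(centroids, key=lambda lab: math.dist(centroids[lab], f))
```

The same skeleton applies whether the keypoints come from a motion-capture rig or from a 2D/3D pose estimator; only the upstream source of `frames` changes, which is what makes the three pipelines comparable on identical recordings.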
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.