Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mukhiddinov, Mukhriddin | - |
dc.contributor.author | Djuraev, Oybek | - |
dc.contributor.author | Akhmedov, Farkhod Abduvali Ugli | - |
dc.contributor.author | Mukhamadiyev, Abdinabi Nuralievich | - |
dc.contributor.author | Cho, Jinsoo | - |
dc.date.accessioned | 2023-03-14T07:40:24Z | - |
dc.date.available | 2023-03-14T07:40:24Z | - |
dc.date.issued | 2023-02 | - |
dc.identifier.issn | 1424-8220 | - |
dc.identifier.issn | 1424-3210 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/87102 | - |
dc.description.abstract | Current artificial intelligence systems for determining a person's emotions rely heavily on lip and mouth movements and on other facial features such as the eyebrows, eyes, and forehead. Furthermore, low-light images are typically misclassified because of the dark regions around the eyes and eyebrows. In this work, we propose a facial emotion recognition method for masked facial images that uses low-light image enhancement and feature analysis of the upper face with a convolutional neural network. The proposed approach employs the AffectNet image dataset, which includes eight types of facial expressions and 420,299 images. First, the lower part of the input facial image is covered with a synthetic mask. Boundary and regional representation methods are used to indicate the head and the upper facial features. Second, we adopt a feature extraction strategy based on facial landmark detection, using the features of the partially masked face. Finally, the extracted features, the coordinates of the detected landmarks, and histograms of oriented gradients are fed into a convolutional neural network for classification. Experimental evaluation shows that the proposed method outperforms existing approaches, achieving an accuracy of 69.3% on the AffectNet dataset. (An illustrative sketch of this pipeline appears after the metadata table below.) | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI | - |
dc.title | Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People | - |
dc.type | Article | - |
dc.identifier.wosid | 000930118800001 | - |
dc.identifier.doi | 10.3390/s23031080 | - |
dc.identifier.bibliographicCitation | SENSORS, v.23, no.3 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.scopusid | 2-s2.0-85147894261 | - |
dc.citation.title | SENSORS | - |
dc.citation.volume | 23 | - |
dc.citation.number | 3 | - |
dc.type.docType | Article | - |
dc.publisher.location | Switzerland | - |
dc.subject.keywordAuthor | emotion recognition | - |
dc.subject.keywordAuthor | facial landmarks | - |
dc.subject.keywordAuthor | computer vision | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | convolutional neural network | - |
dc.subject.keywordAuthor | facial expression recognition | - |
dc.subject.keywordAuthor | visually impaired people | - |
dc.subject.keywordPlus | EXPRESSION RECOGNITION | - |
dc.subject.keywordPlus | INFORMATION | - |
dc.subject.keywordPlus | NETWORK | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Instruments & Instrumentation | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
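The abstract outlines a concrete pipeline: synthetically mask the lower face, detect facial landmarks, keep only the upper-face landmarks, compute HOG descriptors, and classify with a CNN. The sketch below illustrates that flow in Python and is not the authors' released code. It assumes dlib's public 68-point predictor file ("shape_predictor_68_face_landmarks.dat") is available, assumes one detectable face per image, omits the paper's low-light enhancement step, and substitutes a small dense Keras head for the paper's CNN classifier, whose exact architecture is given in the article.

```python
# Illustrative sketch of the pipeline summarized in the abstract -- NOT the
# authors' implementation. Assumptions: dlib's public 68-point model file is
# available locally, each image contains one detectable face, and a small
# dense Keras head stands in for the paper's CNN classifier.
import cv2
import dlib
import numpy as np
import tensorflow as tf
from skimage.feature import hog

# AffectNet's eight expression categories.
EMOTIONS = ["neutral", "happy", "sad", "surprise",
            "fear", "disgust", "anger", "contempt"]

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_upper_face_features(bgr_image):
    """Mask the lower face, then return upper-face landmark coords + HOG."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    face = detector(gray, 1)[0]              # assumes one face per image
    shape = predictor(gray, face)
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])

    # Synthetic mask: fill the polygon spanned by the jawline (points 0-16)
    # and the nose tip (point 30), hiding the mouth and chin.
    lower = np.vstack([pts[:17], pts[30]]).astype(np.int32)
    cv2.fillPoly(gray, [cv2.convexHull(lower)], color=128)

    # Upper-face landmarks: eyebrows (17-26) and eyes (36-47).
    upper = np.concatenate([pts[17:27], pts[36:48]]).ravel().astype(np.float32)

    # HOG descriptor of the masked face, resized to a fixed 96x96 crop.
    top, left = max(face.top(), 0), max(face.left(), 0)
    crop = cv2.resize(gray[top:face.bottom(), left:face.right()], (96, 96))
    hog_vec = hog(crop, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)).astype(np.float32)
    return np.concatenate([upper, hog_vec])

def build_classifier(input_dim):
    """Stand-in dense head; the article feeds these features into a CNN."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(len(EMOTIONS), activation="softmax"),
    ])
```

With these settings, the extracted vector has 44 landmark coordinates plus 4,356 HOG values (11 × 11 blocks × 4 cells × 9 orientations on a 96 × 96 crop), so `build_classifier(4400)` matches the extractor's output dimension.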