Role of Zoning in Facial Expression Using Deep Learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shahzad, Taimur | - |
dc.contributor.author | Iqbal, Khalid | - |
dc.contributor.author | Khan, Murad ALi | - |
dc.contributor.author | Imran, | - |
dc.contributor.author | Iqbal, Naeem | - |
dc.date.accessioned | 2023-03-16T01:40:13Z | - |
dc.date.available | 2023-03-16T01:40:13Z | - |
dc.date.created | 2023-02-22 | - |
dc.date.issued | 2023-02 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/87142 | - |
dc.description.abstract | Facial expression is an unspoken message essential to collaboration and effective discourse. A human's inner emotional state is conveyed through facial expressions, which communicate actual emotions very effectively. Anger, happiness, sadness, contempt, surprise, fear, disgust, and neutral are eight common human expressions. The scientific community has proposed several facial emotion recognition techniques; however, because deep learning models receive few face landmarks with limited intensity information, facial expression recognition performance still needs improvement for accurate prediction. This study proposes zoning-based facial expression recognition (ZFER), which locates more face landmarks to perceive deep facial emotions through zoning. After face extraction, landmarks such as the eyes, eyebrows, nose, forehead, and mouth are extracted from the face. In the second step, each landmark is zoned into four regions, and the zone-based face landmarks are passed to a VGG-16 model to generate a feature map. Finally, the feature map is fed to a fully connected neural network (FCNN) to classify facial emotions into multiple classes. Various experiments are performed on the facial expression recognition (FER) 2013 and CK+ datasets to compare the proposed model with state-of-the-art facial expression recognition approaches using performance assessment metrics such as accuracy. The accuracy of the proposed method with face features is 98.4% on CK+ and 65% on FER2013. With zoning, the experimental results improve from 98.47% to 98.74% on the CK+ dataset. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | - |
dc.relation.isPartOf | IEEE Access | - |
dc.title | Role of Zoning in Facial Expression Using Deep Learning | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000937112300001 | - |
dc.identifier.doi | 10.1109/access.2023.3243850 | - |
dc.identifier.bibliographicCitation | IEEE Access, v.11, pp.16493 - 16508 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85149171367 | - |
dc.citation.endPage | 16508 | - |
dc.citation.startPage | 16493 | - |
dc.citation.title | IEEE Access | - |
dc.citation.volume | 11 | - |
dc.contributor.affiliatedAuthor | Imran, | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | CK+ | - |
dc.subject.keywordAuthor | Computer vision | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | FER2013 | - |
dc.subject.keywordAuthor | zoning | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
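The zoning step described in the abstract (splitting each extracted landmark region into four zones before feature extraction) can be sketched as follows. This is a minimal illustration assuming each landmark crop is divided into four equal quadrants; the function name `zone_landmark` and the quadrant layout are assumptions for illustration, as the record does not specify the paper's exact zoning scheme.

```python
import numpy as np

def zone_landmark(crop: np.ndarray) -> list:
    """Split a landmark crop (H x W) into four equal quadrants.

    Hypothetical helper illustrating the zoning idea; the paper's
    exact region layout may differ from equal quadrants.
    """
    h, w = crop.shape[:2]
    h2, w2 = h // 2, w // 2
    return [
        crop[:h2, :w2],   # top-left zone
        crop[:h2, w2:],   # top-right zone
        crop[h2:, :w2],   # bottom-left zone
        crop[h2:, w2:],   # bottom-right zone
    ]

# Example: a 48x48 grayscale crop (FER2013-style resolution)
crop = np.arange(48 * 48, dtype=np.float32).reshape(48, 48)
zones = zone_landmark(crop)
print([z.shape for z in zones])  # four 24x24 zones
```

In the pipeline described above, each of these zones would then be resized and passed to VGG-16 to produce the feature map consumed by the FCNN classifier.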