Detailed feature extraction network-based fine-grained face segmentation
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Sabina, Umirzakova | - |
dc.contributor.author | Whangbo, Taeg Keun | - |
dc.date.accessioned | 2022-08-25T00:40:14Z | - |
dc.date.available | 2022-08-25T00:40:14Z | - |
dc.date.created | 2022-07-27 | - |
dc.date.issued | 2022-08 | - |
dc.identifier.issn | 0950-7051 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/85282 | - |
dc.description.abstract | Face parsing refers to the labeling of each facial component in a face image and has been employed in facial simulation, expression recognition, and makeup application, effectively providing a basis for further analysis, computation, animation, modification, and numerous other applications. Although existing face parsing methods have demonstrated good performance, they fail to extract rich features and recover accurate segmentation maps, particularly for faces with large variations in expression and highly similar appearances. Moreover, these approaches neglect the semantic gaps and dependencies between facial categories and their boundaries. To address these drawbacks, we propose an efficient dilated convolution network with different aspect ratios that exploits its feature extraction capability to attain accurate face parsing output. The proposed multiscale dilated encoder–decoder convolution model obtains rich component information and efficiently improves the capture of global information by extracting low- and high-level semantic features. To achieve delicate parsing of the face components along their borders and to analyze the connections between the face categories and their border edges, a semantic edge map is learned using a conditional random field, which distinguishes border from non-border pixels during modeling. We conducted experiments on three well-known publicly available face databases. The results demonstrate the high accuracy and capacity of the proposed method in comparison with previous state-of-the-art methods. Our model achieved a mean accuracy of 90% on the CelebAMask-HQ dataset for the category case and 81.43% for the accessory case, and accuracies of 91.58% and 92.44% on the HELEN and LaPa datasets, respectively, demonstrating its effectiveness. © 2022 The Author(s) | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Elsevier | - |
dc.relation.isPartOf | Knowledge-Based Systems | - |
dc.title | Detailed feature extraction network-based fine-grained face segmentation | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000833286600014 | - |
dc.identifier.doi | 10.1016/j.knosys.2022.109036 | - |
dc.identifier.bibliographicCitation | Knowledge-Based Systems, v.250 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.scopusid | 2-s2.0-85131221879 | - |
dc.citation.title | Knowledge-Based Systems | - |
dc.citation.volume | 250 | - |
dc.contributor.affiliatedAuthor | Sabina, Umirzakova | - |
dc.contributor.affiliatedAuthor | Whangbo, Taeg Keun | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Conditional random field | - |
dc.subject.keywordAuthor | Dilated convolution | - |
dc.subject.keywordAuthor | Encoder–decoder | - |
dc.subject.keywordAuthor | Face segmentation | - |
dc.subject.keywordAuthor | Multiscale | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
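The multiscale dilated convolutions named in the abstract's keywords can be illustrated with a minimal NumPy sketch (a hypothetical single-channel implementation for illustration only, not the paper's actual network): the same small kernel is applied at several dilation rates, so the receptive field grows with the rate while the parameter count stays fixed.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode 2D correlation with a dilated kernel.

    The effective kernel footprint is (k-1)*dilation + 1 per axis,
    so larger rates see wider context with the same weights.
    """
    kh, kw = kernel.shape
    eh = (kh - 1) * dilation + 1  # effective height
    ew = (kw - 1) * dilation + 1  # effective width
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input with stride `dilation` inside the footprint
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# Multiscale idea: one 3x3 averaging kernel applied at rates 1, 2, 3.
x = np.arange(64, dtype=float).reshape(8, 8)
k = np.ones((3, 3)) / 9.0
feats = [dilated_conv2d(x, k, d) for d in (1, 2, 3)]
print([f.shape for f in feats])  # valid-mode outputs shrink as the rate grows
```

In practice such branches are computed with padding so all scales share one spatial size and can be concatenated; the sketch uses valid mode only to keep the indexing explicit.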