Detailed Information

Cited 2 times in Web of Science; cited 4 times in Scopus

Detailed feature extraction network-based fine-grained face segmentation (Open Access)

Authors
Umirzakova, Sabina; Whangbo, Taeg Keun
Issue Date
Aug-2022
Publisher
Elsevier
Keywords
Conditional random field; Dilated convolution; Encoder–decoder; Face segmentation; Multiscale
Citation
Knowledge-Based Systems, v.250
Journal Title
Knowledge-Based Systems
Volume
250
URI
https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/85282
DOI
10.1016/j.knosys.2022.109036
ISSN
0950-7051
Abstract
Face parsing refers to the labeling of each facial component in a face image and has been employed in facial simulation, expression recognition, and makeup applications, effectively providing a basis for further analysis, computation, animation, modification, and numerous other applications. Although existing face parsing methods have demonstrated good performance, they fail to extract rich features and recover accurate segmentation maps, particularly for faces with high variations in expression and highly similar appearances. Moreover, these approaches neglect the semantic gaps and dependencies between facial categories and their boundaries. To address these drawbacks, we propose an efficient dilated convolution network with different aspect ratios that exploits its feature extraction capability to attain an accurate face-parsing output. The proposed multiscale dilated encoder–decoder convolution model obtains rich component information and efficiently improves the capture of global information by extracting both low- and high-level semantic features. To achieve a delicate parsing of the face components along their borders and to analyze the connections between the face categories and their border edges, a semantic edge map is learned using a conditional random field, which distinguishes border and non-border pixels during modeling. We conducted experiments on three well-known publicly available face databases. The recorded results demonstrate the high accuracy and capacity of the proposed method in comparison with previous state-of-the-art methods. Our proposed model achieved a mean accuracy of 90% on the CelebAMask-HQ dataset for the category case and 81.43% for the accessory case, and achieved accuracies of 91.58% and 92.44% on the HELEN and LaPa datasets, respectively, thereby demonstrating its effectiveness. © 2022 The Author(s)
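The abstract describes a multiscale dilated encoder–decoder that predicts a per-pixel label for each facial category. The sketch below is an illustrative PyTorch reconstruction of that general idea only: the class names, dilation rates, channel widths, and the toy encoder–decoder are assumptions for demonstration and do not reproduce the authors' published architecture, its aspect-ratio design, or the conditional-random-field edge modeling.

import torch
import torch.nn as nn

class MultiscaleDilatedBlock(nn.Module):
    # Parallel 3x3 convolutions with different dilation rates, fused by a 1x1 conv.
    # Rates and widths here are illustrative, not the paper's values.
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # Concatenate the multiscale responses along channels, then fuse them.
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

class TinyEncoderDecoder(nn.Module):
    # Toy encoder-decoder producing one score map per facial category (sketch only).
    def __init__(self, num_classes=19):  # e.g., number of CelebAMask-HQ categories (assumed)
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            MultiscaleDilatedBlock(32, 64),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, x):
        return self.dec(self.enc(x))

if __name__ == "__main__":
    logits = TinyEncoderDecoder()(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # (1, 19, 128, 128): one logit map per facial category

A segmentation map would be obtained by taking the argmax over the class dimension of these logits; the paper additionally refines category borders with a conditional random field, which is not shown here.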
Files in This Item
There are no files associated with this item.
Appears in Collections
College of IT Convergence > Department of Computer Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Umirzakova, Sabina
College of IT Convergence (School of Computer Engineering, Computer Engineering Major)
