Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Convolutional Network with Densely Backward Attention for Facial Expression Recognition

Full metadata record
DC Field | Value | Language
dc.contributor.author | Hua, Cam-Hao | -
dc.contributor.author | Thien Huynh-The | -
dc.contributor.author | Seo, Hyunseok | -
dc.contributor.author | Lee, Sungyoung | -
dc.date.accessioned | 2022-05-17T04:40:04Z | -
dc.date.available | 2022-05-17T04:40:04Z | -
dc.date.created | 2022-02-08 | -
dc.date.issued | 2020-01 | -
dc.identifier.issn | 2644-0164 | -
dc.identifier.uri | https://scholarworks.bwise.kr/kumoh/handle/2020.sw.kumoh/21120 | -
dc.description.abstract | The emergence of convolutional neural networks (CNNs) has enabled facial expression recognition to achieve significant outcomes. However, while existing multi-stream networks suffer from costly computation, attention-embedded approaches do not involve multiple levels of semantic context in the predefined CNN. Based on the observation that emotions expressed on a person's face are a fusion of various muscular modalities, relying only on the outputs and corresponding attentional features of the deepest layer in the CNN is insufficient because informative details are lost through multiple sub-sampling stages. Therefore, this paper introduces a CNN with densely backward attention that leverages the aggregation of channel-wise attention over multi-level features in a backbone network to reach high recognition performance with cost-effective resource consumption. In particular, cross-channel semantic information in high-level features is exploited densely to recalibrate fine-grained details in low-level counterparts. A multi-level aggregation step is then executed to thoroughly involve the spatial representations of important facial modalities. As a consequence, the proposed approach achieves a mean class accuracy of 79.37% on RAF-DB, which is competitive with the state of the art. | -
dc.language | English | -
dc.language.iso | en | -
dc.publisher | IEEE | -
dc.title | Convolutional Network with Densely Backward Attention for Facial Expression Recognition | -
dc.type | Conference | -
dc.contributor.affiliatedAuthor | Thien Huynh-The | -
dc.identifier.wosid | 000568448900013 | -
dc.identifier.bibliographicCitation | 14th International Conference on Ubiquitous Information Management and Communication (IMCOM) | -
dc.relation.isPartOf | 14th International Conference on Ubiquitous Information Management and Communication (IMCOM) | -
dc.relation.isPartOf | PROCEEDINGS OF THE 2020 14TH INTERNATIONAL CONFERENCE ON UBIQUITOUS INFORMATION MANAGEMENT AND COMMUNICATION (IMCOM) | -
dc.citation.title | 14th International Conference on Ubiquitous Information Management and Communication (IMCOM) | -
dc.citation.conferencePlace | KO | -
dc.citation.conferencePlace | Sungkyunkwan Univ, Taichung, TAIWAN | -
dc.citation.conferenceDate | 2020-01-03 | -
dc.type.rims | CONF | -
dc.description.journalClass | 1 | -
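
The abstract in the record above describes a densely backward channel-attention scheme: cross-channel semantics from deeper backbone stages recalibrate shallower feature maps, and the refined multi-level maps are then aggregated for classification. The following is a minimal PyTorch sketch of that idea only, not the authors' implementation; the channel widths (ResNet-50-like stage outputs), reduction ratio, embedding width, and 7-class output (RAF-DB basic emotions) are illustrative assumptions.

```python
# Minimal sketch of a "densely backward" channel-attention head on top of a
# multi-stage CNN backbone. NOT the authors' implementation: channel widths,
# reduction ratio, embedding width, and the 7-class output are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate: cross-channel semantics of a deeper
    (high-level) feature map recalibrate a shallower (low-level) feature map."""

    def __init__(self, high_channels, low_channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(high_channels, high_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(high_channels // reduction, low_channels),
            nn.Sigmoid(),
        )

    def forward(self, high_feat, low_feat):
        # Global average pooling summarizes the deeper stage across space.
        gate = F.adaptive_avg_pool2d(high_feat, 1).flatten(1)   # (N, C_high)
        gate = self.fc(gate).unsqueeze(-1).unsqueeze(-1)        # (N, C_low, 1, 1)
        return low_feat * gate                                  # recalibrated low-level map


class DenselyBackwardAttentionHead(nn.Module):
    """Every deeper stage recalibrates every shallower one (dense backward
    connections); the refined maps are then aggregated for classification."""

    def __init__(self, channels=(256, 512, 1024, 2048), num_classes=7, embed=256):
        super().__init__()
        n = len(channels)
        # One gate per (deeper stage j -> shallower stage i) pair, j > i.
        self.gates = nn.ModuleDict({
            f"{j}to{i}": ChannelAttention(channels[j], channels[i])
            for j in range(n) for i in range(j)
        })
        # Project each refined level to a common width before fusion.
        self.proj = nn.ModuleList([nn.Conv2d(c, embed, kernel_size=1) for c in channels])
        self.classifier = nn.Linear(embed * n, num_classes)

    def forward(self, feats):
        n = len(feats)
        refined = []
        for i in range(n):
            x = feats[i]
            # Add recalibrations coming backward from all deeper levels.
            for j in range(i + 1, n):
                x = x + self.gates[f"{j}to{i}"](feats[j], feats[i])
            refined.append(self.proj[i](x))
        # Multi-level aggregation: pool each refined map and concatenate.
        pooled = [F.adaptive_avg_pool2d(r, 1).flatten(1) for r in refined]
        return self.classifier(torch.cat(pooled, dim=1))


# Dummy usage with feature shapes resembling ResNet-50 stage outputs.
feats = [torch.randn(2, c, s, s) for c, s in zip((256, 512, 1024, 2048), (56, 28, 14, 7))]
logits = DenselyBackwardAttentionHead()(feats)
print(logits.shape)  # torch.Size([2, 7])
```

Because only lightweight pooling, fully connected gates, and 1x1 projections are added on a single backbone, this kind of head keeps the computational overhead small compared with multi-stream networks, which matches the cost argument made in the abstract.
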
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 2. Conference Papers


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
