Light-FER: A Lightweight Facial Emotion Recognition System on Edge Devices
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Pascual, Alexander M. M. | - |
dc.contributor.author | Valverde, Erick C. C. | - |
dc.contributor.author | Kim, Jeong-in | - |
dc.contributor.author | Jeong, Jin-Woo | - |
dc.contributor.author | Jung, Yuchul | - |
dc.contributor.author | Kim, Sang-Ho | - |
dc.contributor.author | Lim, Wansu | - |
dc.date.accessioned | 2023-12-11T20:00:57Z | - |
dc.date.available | 2023-12-11T20:00:57Z | - |
dc.date.issued | 2022-12 | - |
dc.identifier.issn | 1424-8220 | - |
dc.identifier.issn | 1424-3210 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/kumoh/handle/2020.sw.kumoh/26433 | - |
dc.description.abstract | Facial emotion recognition (FER) systems are essential to recent advanced artificial intelligence (AI) applications that aim to realize better human-computer interaction. Most deep learning-based FER systems suffer from low accuracy and high resource requirements, especially when deployed on edge devices with limited computing power and memory. To tackle these problems, this paper proposes a lightweight FER system, called Light-FER, which is obtained from the Xception model through model compression. First, pruning is performed during network training to remove the less important connections within the Xception architecture. Second, the model is quantized to half-precision format, which significantly reduces its memory consumption. Third, several deep learning compilers that perform advanced optimization techniques are benchmarked to further accelerate the inference speed of the FER system. Lastly, Light-FER is deployed on an NVIDIA Jetson Nano to experimentally demonstrate the objectives of the proposed system on edge devices. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI | - |
dc.title | Light-FER: A Lightweight Facial Emotion Recognition System on Edge Devices | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.3390/s22239524 | - |
dc.identifier.scopusid | 2-s2.0-85143517346 | - |
dc.identifier.wosid | 000897519100001 | - |
dc.identifier.bibliographicCitation | SENSORS, v.22, no.23 | - |
dc.citation.title | SENSORS | - |
dc.citation.volume | 22 | - |
dc.citation.number | 23 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Instruments & Instrumentation | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
dc.subject.keywordAuthor | edge device | - |
dc.subject.keywordAuthor | facial emotion recognition | - |
dc.subject.keywordAuthor | model compression | - |
dc.subject.keywordAuthor | Xception | - |
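The compression pipeline described in the abstract (magnitude-based pruning followed by half-precision quantization) can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's actual implementation; the function names, the 50% sparsity target, and the toy weight matrix are assumptions for demonstration only.

```python
import numpy as np

def prune_weights(w, sparsity=0.5):
    """Magnitude pruning (illustrative): zero out the smallest-magnitude
    fraction of weights, keeping the array shape intact."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute value.
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

def quantize_fp16(w):
    """Half-precision quantization (illustrative): cast FP32 weights to
    FP16, halving their memory footprint."""
    return w.astype(np.float16)

# Toy FP32 weight matrix standing in for a layer of the network.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

wp = prune_weights(w, sparsity=0.5)   # step 1: pruning
wq = quantize_fp16(wp)                # step 2: FP16 quantization

print("zeroed weights:", int((wp == 0).sum()))
print("dtype:", wq.dtype, "bytes:", wq.nbytes, "(was", w.nbytes, ")")
```

In the paper's setting the pruned, half-precision model is then passed through deep learning compilers and deployed on the Jetson Nano; those steps are hardware- and toolchain-specific and are not sketched here.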
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.