Detailed Information


NN compactor: Minimizing memory and logic resources for small neural networks

Full metadata record
DC Field Value Language
dc.contributor.authorHong, Seongmin-
dc.contributor.authorLee, Inho-
dc.contributor.authorPark, Yongjun-
dc.date.accessioned2022-07-12T00:47:51Z-
dc.date.available2022-07-12T00:47:51Z-
dc.date.created2021-05-13-
dc.date.issued2018-04-
dc.identifier.urihttps://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/150229-
dc.description.abstractSpecialized neural accelerators are an appealing hardware platform for machine learning systems because they provide both high performance and energy efficiency. Although various neural accelerators have recently been introduced, they are difficult to adapt to embedded platforms because current neural accelerators require high memory capacity and bandwidth to fetch synaptic weights quickly. Embedded platforms often cannot meet these memory requirements because of their limited resources. In FPGA-based IoT (Internet of Things) systems, the problem becomes even worse: computation units generated from logic blocks cannot be fully utilized because of the small size of block memory. To overcome this problem, we propose a novel dual-track quantization technique that reduces synaptic weight width based on the magnitude of the value while minimizing accuracy loss. In this value-adaptive technique, large-value and small-value weights are quantized differently. In this paper, we present a fully automatic framework called NN Compactor that generates a compact neural accelerator by minimizing the memory requirements of synaptic weights through dual-track quantization and minimizing the logic requirements of processing units (PUs), with minimal recognition accuracy loss. For three widely used datasets (MNIST, CNAE-9, and Forest), experimental results demonstrate that our compact neural accelerator achieves an average performance improvement of 6.4× over a baseline embedded system while using minimal resources with minimal accuracy loss.-
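The value-adaptive idea in the abstract can be illustrated with a minimal sketch: weights are split by magnitude into two tracks that use different quantizers. The threshold, bit widths, and function names below are hypothetical choices for illustration, not details from the paper.

```python
# Hedged sketch of "dual-track" (value-adaptive) quantization: weights whose
# magnitude falls below a threshold use one uniform quantizer, the rest use
# another. All parameters here are illustrative assumptions.
def dual_track_quantize(weights, threshold=0.1, small_bits=4, large_bits=8, max_val=1.0):
    """Map each weight to a (track, integer level) pair.

    Small-magnitude weights span [-threshold, threshold] with `small_bits`;
    large-magnitude weights span [-max_val, max_val] with `large_bits`.
    """
    out = []
    for w in weights:
        if abs(w) < threshold:
            track, scale, half = "small", threshold, ((1 << small_bits) - 1) // 2
        else:
            track, scale, half = "large", max_val, ((1 << large_bits) - 1) // 2
        out.append((track, round(w / scale * half)))  # signed integer level
    return out

def dequantize(track_levels, threshold=0.1, small_bits=4, large_bits=8, max_val=1.0):
    """Reconstruct approximate weights from (track, level) pairs."""
    res = []
    for track, q in track_levels:
        if track == "small":
            scale, half = threshold, ((1 << small_bits) - 1) // 2
        else:
            scale, half = max_val, ((1 << large_bits) - 1) // 2
        res.append(q * scale / half)
    return res
```

Because the small track covers a narrow range, even a few bits give fine resolution there, which is the intuition behind quantizing the two value ranges differently.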
dc.languageEnglish-
dc.language.isoen-
dc.publisherInstitute of Electrical and Electronics Engineers Inc.-
dc.titleNN compactor: Minimizing memory and logic resources for small neural networks-
dc.typeArticle-
dc.contributor.affiliatedAuthorPark, Yongjun-
dc.identifier.doi10.23919/DATE.2018.8342074-
dc.identifier.scopusid2-s2.0-85048760007-
dc.identifier.bibliographicCitationProceedings of the 2018 Design, Automation and Test in Europe Conference and Exhibition, DATE 2018, v.2018-January, pp.581 - 584-
dc.relation.isPartOfProceedings of the 2018 Design, Automation and Test in Europe Conference and Exhibition, DATE 2018-
dc.citation.titleProceedings of the 2018 Design, Automation and Test in Europe Conference and Exhibition, DATE 2018-
dc.citation.volume2018-January-
dc.citation.startPage581-
dc.citation.endPage584-
dc.type.rimsART-
dc.type.docTypeConference Paper-
dc.description.journalClass1-
dc.description.isOpenAccessN-
dc.description.journalRegisteredClassscopus-
dc.subject.keywordPlusAcceleration-
dc.subject.keywordPlusAutomation-
dc.subject.keywordPlusComputation theory-
dc.subject.keywordPlusEnergy efficiency-
dc.subject.keywordPlusInternet of things-
dc.subject.keywordPlusLearning systems-
dc.subject.keywordPlusNeural networks-
dc.subject.keywordPlusParticle accelerators-
dc.subject.keywordPlusAdaptive technique-
dc.subject.keywordPlusEmbedded platforms-
dc.subject.keywordPlusHardware platform-
dc.subject.keywordPlusLogic resources-
dc.subject.keywordPlusMemory requirements-
dc.subject.keywordPlusPerformance improvements-
dc.subject.keywordPlusQuantization-
dc.subject.keywordPlusRecognition accuracy-
dc.subject.keywordPlusComputer circuits-
dc.subject.keywordAuthorAccelerator-
dc.subject.keywordAuthorAutomation-
dc.subject.keywordAuthorNeural networks-
dc.subject.keywordAuthorQuantization-
dc.identifier.urlhttps://ieeexplore.ieee.org/document/8342074-
Appears in Collections
College of Engineering (Seoul) > School of Computer Software (Seoul) > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Park, Yongjun
College of Engineering, Seoul Campus (School of Computer Software)
