Detailed Information

Cited 2 times in Web of Science; cited 2 times in Scopus

CENNA: Cost-Effective Neural Network Accelerator

Full metadata record
dc.contributor.author: Park, Sang-Soo
dc.contributor.author: Chung, Ki Seok
dc.date.accessioned: 2021-07-30T04:54:52Z
dc.date.available: 2021-07-30T04:54:52Z
dc.date.created: 2021-05-12
dc.date.issued: 2020-01
dc.identifier.issn: 2079-9292
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/2092
dc.description.abstract: Convolutional neural networks (CNNs) are widely adopted in various applications. State-of-the-art CNN models deliver excellent classification performance, but they require a large amount of computation and data exchange because they typically employ many processing layers. Among these processing layers, convolution layers, which carry out many multiplications and additions, account for a major portion of computation and memory access. Therefore, reducing the amount of computation and memory access is the key to high-performance CNNs. In this study, we propose a cost-effective neural network accelerator, named CENNA, whose hardware cost is reduced by employing a cost-centric matrix multiplication that combines Strassen's multiplication with a naive multiplication. Furthermore, the convolution method using the proposed matrix multiplication can minimize data movement by reusing both the feature map and the convolution kernel without any additional control logic. In terms of throughput, power consumption, and silicon area, the efficiency of CENNA is up to 88 times higher than that of conventional designs for CNN inference.
dc.language: English
dc.language.iso: en
dc.publisher: MDPI
dc.title: CENNA: Cost-Effective Neural Network Accelerator
dc.type: Article
dc.contributor.affiliatedAuthor: Chung, Ki Seok
dc.identifier.doi: 10.3390/electronics9010134
dc.identifier.scopusid: 2-s2.0-85078249288
dc.identifier.wosid: 000516827000134
dc.identifier.bibliographicCitation: ELECTRONICS, v.9, no.1, pp. 1-19
dc.relation.isPartOf: ELECTRONICS
dc.citation.title: ELECTRONICS
dc.citation.volume: 9
dc.citation.number: 1
dc.citation.startPage: 1
dc.citation.endPage: 19
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Physics
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Physics, Applied
dc.subject.keywordAuthor: convolutional neural network (CNN)
dc.subject.keywordAuthor: neural network accelerator
dc.subject.keywordAuthor: neural processing unit (NPU)
dc.subject.keywordAuthor: CNN inference
dc.identifier.url: https://www.mdpi.com/2079-9292/9/1/134
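The abstract above describes a cost-centric matrix multiplication that combines Strassen's scheme with naive multiplication to reduce hardware cost. As a rough illustration of that general idea only, and not of the actual CENNA datapath, the following minimal Python sketch applies one level of Strassen's 2x2 block scheme (7 block products instead of 8) and uses naive multiplication inside each block product; the matrix size, blocking, and function names are assumptions made for illustration.

# Illustrative sketch only: one Strassen level over 2x2 blocks, with naive
# multiplication for the block products. Sizes and names are assumptions,
# not the CENNA hardware design.
import numpy as np

def naive_matmul(a, b):
    """Naive multiplication: n^3 multiplications, n^2 * (n - 1) additions."""
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i, j] += a[i, k] * b[k, j]
    return c

def hybrid_strassen_matmul(a, b):
    """One Strassen level over 2x2 blocks (7 block products instead of 8),
    then naive multiplication inside each block product."""
    n = a.shape[0] // 2
    a11, a12, a21, a22 = a[:n, :n], a[:n, n:], a[n:, :n], a[n:, n:]
    b11, b12, b21, b22 = b[:n, :n], b[:n, n:], b[n:, :n], b[n:, n:]

    m1 = naive_matmul(a11 + a22, b11 + b22)
    m2 = naive_matmul(a21 + a22, b11)
    m3 = naive_matmul(a11, b12 - b22)
    m4 = naive_matmul(a22, b21 - b11)
    m5 = naive_matmul(a11 + a12, b22)
    m6 = naive_matmul(a21 - a11, b11 + b12)
    m7 = naive_matmul(a12 - a22, b21 + b22)

    c11 = m1 + m4 - m5 + m7
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return np.block([[c11, c12], [c21, c22]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal((4, 4))
    b = rng.standard_normal((4, 4))
    # Both paths should agree with NumPy's reference result.
    assert np.allclose(naive_matmul(a, b), a @ b)
    assert np.allclose(hybrid_strassen_matmul(a, b), a @ b)

The trade-off this sketch illustrates is that Strassen's scheme saves one multiplication per 2x2 block step at the cost of extra additions, which is why a hybrid of the two methods can be attractive when multipliers dominate silicon area and power.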
Appears in Collections
College of Engineering (Seoul) > School of Electronic Engineering (Seoul) > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Chung, Ki Seok
COLLEGE OF ENGINEERING (SCHOOL OF ELECTRONIC ENGINEERING)
