Detailed Information


Cryptensor: A Resource-Shared Co-Processor to Accelerate Convolutional Neural Network and Polynomial Convolution

Authors
See, Jin-Chuan; Ng, Hui-Fuang; Tan, Hung-Khoon; Chang, Jing-Jing; Mok, Kai-Ming; Lee, Wai-Kong; Lin, Chih-Yang
Issue Date
Dec-2023
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Convolutional neural network (CNN); cryptography; field programmable gate array (FPGA); generic-matrix-multiplication (GEMM); polynomial convolution; ResNet-18; systolic tensor array (STA); VGG-16
Citation
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, v.42, no.12, pp 4735 - 4748
Pages
14
Journal Title
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
Volume
42
Number
12
Start Page
4735
End Page
4748
URI
https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/90756
DOI
10.1109/TCAD.2023.3296375
ISSN
0278-0070
1937-4151
Abstract
Practical deployment of convolutional neural network (CNN) and cryptography algorithms on constrained devices is challenging due to their huge computation and memory requirements. Developing separate hardware accelerators for AI and cryptography incurs large area consumption, which is undesirable in many applications. This article proposes a viable solution to this issue by expressing both CNN and cryptography as generic-matrix-multiplication (GEMM) operations and mapping them to the same accelerator for reduced hardware consumption. A novel systolic tensor array (STA) design is proposed to reduce data movement, effectively reducing the operand registers by 2x. Two novel techniques, input layer extension and polynomial factorization, are proposed to mitigate the under-utilization issue found in existing STA architectures. Additionally, the tensor processing elements (TPEs) are fused using DSP units to reduce the look-up table (LUT) and flip-flop (FF) consumption for implementing multipliers. On top of that, a novel memory-efficient factorization technique is proposed to allow computation of polynomial convolution on the same STA. Experimental results show that Cryptensor achieves 21.6% better throughput for a VGG-16 implementation on an XC7Z020 FPGA and up to 8.40x better energy efficiency compared to an existing ResNet-18 implementation on an XC7Z045 FPGA. Cryptensor can also flexibly support multiple security levels in the NTRU scheme with no additional hardware. The proposed hardware unifies the computation of two domains that are critical for IoT applications, greatly reducing the hardware consumption on edge nodes.
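
The core idea in the abstract is that both workloads reduce to one shared GEMM kernel. The following is a minimal NumPy sketch of that reduction, purely illustrative and not the paper's STA/TPE hardware: im2col turns CNN convolution into a matrix product, and NTRU-style polynomial convolution in Z_q[x]/(x^N - 1) becomes a circulant-matrix-times-vector product. All function names and parameters here are hypothetical.

import numpy as np

def gemm(a, b):
    # The single shared kernel: plain matrix-matrix multiplication.
    return a @ b

def im2col(x, kh, kw):
    # Unfold an HxW input into a (kh*kw) x (out_h*out_w) patch matrix.
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((kh * kw, out_h * out_w), dtype=x.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols, out_h, out_w

def conv2d_as_gemm(x, k):
    # CNN-style 2-D convolution (cross-correlation) expressed as GEMM.
    cols, out_h, out_w = im2col(x, *k.shape)
    return gemm(k.ravel()[None, :], cols).reshape(out_h, out_w)

def polymul_as_gemm(a, b, q):
    # Polynomial convolution in Z_q[x]/(x^N - 1) expressed as GEMM:
    # build the circulant matrix of `a` and multiply it by the vector `b`.
    n = len(a)
    circ = np.stack([np.roll(a, j) for j in range(n)], axis=1)  # N x N
    return gemm(circ, b) % q

# Tiny smoke test of both paths through the same gemm() kernel.
x = np.arange(16, dtype=np.int64).reshape(4, 4)
k = np.ones((3, 3), dtype=np.int64)
print(conv2d_as_gemm(x, k))           # 2x2 map of 3x3 patch sums
a = np.array([1, 2, 3, 4], dtype=np.int64)
b = np.array([5, 6, 7, 8], dtype=np.int64)
print(polymul_as_gemm(a, b, q=97))    # cyclic convolution mod 97

Because both paths funnel into the same gemm() call, a single accelerator datapath can in principle serve either workload, which is the resource-sharing argument the abstract makes for edge devices.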
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles
