A Study of Quantization Effect on a Lightweight Model with GraNet Filter Pruning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 설광수 | - |
dc.contributor.author | 노시동 | - |
dc.contributor.author | 정기석 | - |
dc.date.accessioned | 2023-08-01T06:53:22Z | - |
dc.date.available | 2023-08-01T06:53:22Z | - |
dc.date.created | 2023-07-21 | - |
dc.date.issued | 2022-11 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/188562 | - |
dc.description.abstract | As convolutional neural networks grow deeper and wider, model compression is widely used to reduce computation and memory usage. Pruning, which includes structured and unstructured variants, is one of the most widely adopted model compression methods. Structured pruning can reduce the network size by thinning the model, but it may suffer worse accuracy degradation than unstructured pruning. In this study, we claim that if quantization is used in conjunction with structured pruning, the data size can be reduced without significantly sacrificing the model's performance. We propose a lightweight model to which both GraNet structured pruning and 8-bit weight quantization are applied, and we evaluate both static and dynamic quantization for quantizing the pruned model. The experiments perform image classification with a ResNet18 model, pruned and quantized, on the CIFAR-100 dataset. Compared to the original model, we reduce the weight size by 84.25%, 88%, and 96.25% under accuracy-degradation constraints of 2.5%, 5%, and 10%, respectively, using GraNet filter pruning and 8-bit quantization. | - |
dc.language | Korean | - |
dc.language.iso | ko | - |
dc.publisher | Institute of Embedded Engineering of Korea (IEMEK) | - |
dc.title | A Study of Quantization Effect on a Lightweight Model with GraNet Filter Pruning | - |
dc.title.alternative | A Study of Quantization Effect on a Lightweight Model with GraNet Filter Pruning | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | 정기석 | - |
dc.identifier.bibliographicCitation | 2022 IEMEK Fall Conference, v.0, no.0, pp.296-299 | - |
dc.relation.isPartOf | 2022 IEMEK Fall Conference | - |
dc.citation.title | 2022 IEMEK Fall Conference | - |
dc.citation.volume | 0 | - |
dc.citation.number | 0 | - |
dc.citation.startPage | 296 | - |
dc.citation.endPage | 299 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceeding | - |
dc.description.journalClass | 3 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | other | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | Model compression | - |
dc.subject.keywordAuthor | Pruning | - |
dc.subject.keywordAuthor | Quantization | - |
dc.identifier.url | http://esoc.hanyang.ac.kr/publications/2022/GraNet%20%EA%B8%B0%EB%B0%98%EC%9D%98%20%ED%95%84%ED%84%B0%20%ED%94%84%EB%A3%A8%EB%8B%9D%EC%9D%84%20%EC%A0%81%EC%9A%A9%ED%95%9C%20%EA%B2%BD%EB%9F%89%20%EB%AA%A8%EB%8D%B8%EC%9D%98%20%EC%96%91%EC%9E%90%ED%99%94%20%ED%9A%A8%EA%B3%BC%EC%97%90%20%EB%8C%80%ED%95%9C%20%EC%97%B0%EA%B5%AC.pdf | - |
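The abstract pairs GraNet filter pruning with 8-bit weight quantization to shrink weight storage. The record does not specify the paper's tooling, so the following is a minimal, self-contained sketch of the 8-bit affine weight-quantization step only; the function names `quantize_8bit` and `dequantize` are illustrative, not from the paper.

```python
# Hypothetical sketch of 8-bit affine (asymmetric) weight quantization.
# Real deployments would use a framework's static or dynamic quantization;
# this only illustrates the float32 -> uint8 mapping and its error bound.

def quantize_8bit(weights):
    """Map float weights to uint8 values with an affine scale/zero-point."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255 or 1.0   # guard against all-equal weights
    zero_point = round(-w_min / scale)     # integer offset so 0.0 is representable
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the uint8 encoding."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.51, -0.02, 0.0, 0.13, 0.49]
q, s, z = quantize_8bit(weights)
w_hat = dequantize(q, s, z)
# Each recovered weight is within one quantization step of the original,
# while per-weight storage drops from 32 bits to 8 bits (a 75% reduction).
assert all(abs(a - b) <= s for a, b in zip(weights, w_hat))
```

Storing uint8 instead of float32 alone removes 75% of the weight bytes; under this reading, the remaining portion of the reported 84.25-96.25% reductions would come from the filters removed by GraNet pruning.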