
Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models

Full metadata record
dc.contributor.author: Yang, Jaewoo
dc.contributor.author: Kim, Hayun
dc.contributor.author: Ji, Junyung
dc.contributor.author: Kim, Younghoon
dc.date.accessioned: 2025-05-16T07:30:43Z
dc.date.available: 2025-05-16T07:30:43Z
dc.date.issued: 2025-04
dc.identifier.issn: 1999-5903
dc.identifier.uri: https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/125234
dc.description.abstract: Modern large language models (LLMs) achieve state-of-the-art performance through architectural advancements but require high computational costs for inference. Post-training quantization is a widely adopted approach to reduce these costs by quantizing weights and activations to lower precision, such as INT8. However, we identify a critical challenge in activation quantization for GLU (Gated Linear Unit) variants, which are commonly used in the feed-forward networks of modern LLMs such as the LLaMA family. Specifically, severe local quantization errors arise due to excessively large activation magnitudes, which we refer to as activation spikes, leading to significant degradation in model performance. Our analysis reveals a systematic pattern in these spikes: they predominantly occur in the FFN (feed-forward network) layers at the early and late layers of the model and are concentrated on a small subset of tokens rather than being uniformly distributed across a token sequence. To mitigate this issue, we propose two empirical methods, Quantization-free Module (QFeM) and Quantization-free Prefix (QFeP), which isolate activation spikes during quantization. Extensive experiments demonstrate that our methods effectively improve activation quantization, particularly in coarse-grained quantization schemes, enhancing the performance of LLMs with GLU variants and addressing the limitations of existing quantization techniques. The code for implementing our methods and reproducing the experiments is publicly available in our GitHub repository.
dc.format.extent: 21
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI
dc.title: Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models
dc.type: Article
dc.publisher.location: Switzerland
dc.identifier.doi: 10.3390/fi17040185
dc.identifier.scopusid: 2-s2.0-105003621632
dc.identifier.wosid: 001475065400001
dc.identifier.bibliographicCitation: FUTURE INTERNET, v.17, no.4, pp. 1-21
dc.citation.title: FUTURE INTERNET
dc.citation.volume: 17
dc.citation.number: 4
dc.citation.startPage: 1
dc.citation.endPage: 21
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scopus
dc.description.journalRegisteredClass: esci
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.subject.keywordAuthor: quantization
dc.subject.keywordAuthor: LLM
dc.subject.keywordAuthor: post-training quantization
dc.subject.keywordAuthor: outliers
dc.identifier.url: https://www.mdpi.com/1999-5903/17/4/185
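
The abstract attributes the quantization degradation to a few excessively large activation values. As a minimal, hypothetical illustration (not the authors' QFeM/QFeP implementation), the NumPy sketch below shows how a single activation spike inflates the scale of per-tensor INT8 quantization and thereby the reconstruction error of ordinary activations, and how keeping the spike in full precision while quantizing the rest recovers accuracy. The spike magnitude (400.0) and the detection threshold (100.0) are made-up values used only for the demonstration.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization followed by dequantization."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127)
    return (q * scale).astype(np.float32)

rng = np.random.default_rng(0)
acts = rng.normal(0.0, 1.0, size=4096).astype(np.float32)  # stand-in for FFN activations
acts[0] = 400.0  # hypothetical "activation spike" on a single token

# Naive per-tensor INT8: the spike dictates the scale, so the ordinary
# activations collapse onto only a few integer levels.
err_naive = np.mean((acts - quantize_int8(acts)) ** 2)

# Isolating the spike: keep it in full precision, quantize the rest.
mask = np.abs(acts) < 100.0              # hypothetical spike-detection threshold
isolated = acts.copy()
isolated[mask] = quantize_int8(acts[mask])
err_isolated = np.mean((acts - isolated) ** 2)

print(f"MSE with naive per-tensor INT8:     {err_naive:.6f}")
print(f"MSE with spike kept in full precision: {err_isolated:.6f}")
```

Running the sketch shows the mean-squared error dropping by several orders of magnitude once the spike is excluded from the quantization range, which mirrors the paper's motivation for isolating activation spikes in coarse-grained quantization schemes.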
Appears in Collections: COLLEGE OF COMPUTING > DEPARTMENT OF ARTIFICIAL INTELLIGENCE > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Younghoon
ERICA College of Computing (DEPARTMENT OF ARTIFICIAL INTELLIGENCE)
