Detailed Information

Cited 2 times in Web of Science; cited 2 times in Scopus

MASCOT: A Quantization Framework for Efficient Matrix Factorization in Recommender Systems

Authors
Ko, Yunyong; Yu, Jae-Seo; Bae, Hong-Kyun; Park, Yongjun; Lee, Dongwon; Kim, Sang-Wook
Issue Date
Jan-2022
Publisher
IEEE COMPUTER SOC
Keywords
quantization; matrix factorization; precision switching; recommender systems
Citation
2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), v.2021-Decem, pp.290 - 299
Indexed
SCOPUS
Journal Title
2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021)
Volume
2021-Decem
Start Page
290
End Page
299
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/139740
DOI
10.1109/ICDM51629.2021.00039
ISSN
1550-4786
Abstract
In recent years, quantization methods have successfully accelerated the training of large deep neural network (DNN) models by reducing the level of precision in computing operations (e.g., forward/backward passes) without sacrificing accuracy. In this work, therefore, we attempt to apply such a quantization idea to the popular matrix factorization (MF) methods to deal with the growing scale of models and datasets in recommender systems. However, to our dismay, we observe that the state-of-the-art quantization methods are not effective in the training of MF models, unlike their successes in the training of DNN models. To explain this phenomenon, we posit that two distinctive features of MF model training account for the difference: (i) the training of MF models is much more memory-intensive than that of DNN models, and (ii) the quantization errors across users and items in recommendation are not uniform. From these observations, we develop a quantization framework for MF models, named MASCOT, employing novel strategies (i.e., m-quantization and g-switching) to successfully address the aforementioned limitations of quantization in the training of MF models. A comprehensive evaluation on four real-world datasets demonstrates that MASCOT improves the training performance of MF models by about 45%, compared to training without quantization, while maintaining low model errors, and that the strategies and implementation optimizations of MASCOT are quite effective in the training of MF models. We release the code of MASCOT and the datasets at: https://github.com/Yujaeseo/ICDM 2021_MASCOT.
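The abstract notes that MF training is memory-intensive, which is what makes low-precision storage of the factor matrices attractive. Below is a minimal, illustrative sketch of that general idea only — it is not MASCOT's actual m-quantization or g-switching algorithm, and the function name `train_mf_half`, the toy ratings, and all hyperparameters are assumptions for demonstration. The factor matrices are stored in float16 (halving memory traffic) while each SGD update is computed in float32:

```python
import numpy as np

def train_mf_half(ratings, n_users, n_items, k=8, lr=0.05, reg=0.02, epochs=200):
    """Toy MF-SGD with low-precision factor storage.

    Factors P, Q are *stored* in float16; each update is *computed* in
    float32 to limit rounding error before casting back down.
    """
    rng = np.random.default_rng(0)
    P = rng.normal(0, 0.1, (n_users, k)).astype(np.float16)
    Q = rng.normal(0, 0.1, (n_items, k)).astype(np.float16)
    for _ in range(epochs):
        for u, i, r in ratings:
            # Promote to float32 for the arithmetic of one SGD step.
            p, q = P[u].astype(np.float32), Q[i].astype(np.float32)
            err = r - float(p @ q)
            # Gradient step with L2 regularization, then quantize back.
            P[u] = (p + lr * (err * q - reg * p)).astype(np.float16)
            Q[i] = (q + lr * (err * p - reg * q)).astype(np.float16)
    return P, Q

# Toy usage: 3 users x 3 items with a few observed ratings.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 4.0), (2, 2, 2.0)]
P, Q = train_mf_half(ratings, n_users=3, n_items=3)
preds = [float(P[u].astype(np.float32) @ Q[i].astype(np.float32))
         for u, i, _ in ratings]
rmse = float(np.sqrt(np.mean([(r - p) ** 2
                              for (_, _, r), p in zip(ratings, preds)])))
```

Computing the step in float32 before casting back keeps per-update rounding error small while the stored factors stay at half precision; the paper's contribution goes further by choosing what to quantize and switching precision adaptively per user/item.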
Appears in Collections
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles



Related Researcher

Kim, Sang-Wook
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
