Autoencoder-based personalized ranking framework unifying explicit and implicit feedback for accurate top-N recommendation
- Authors
- Chae, Dong-Kyu; Kim, Sang-Wook; Lee, Jung-Tae
- Issue Date
- Jul-2019
- Publisher
- ELSEVIER
- Keywords
- Collaborative filtering; Top-N recommendation; Deep learning; Autoencoders
- Citation
- KNOWLEDGE-BASED SYSTEMS, v.176, pp.110 - 121
- Indexed
- SCIE; SCOPUS
- Journal Title
- KNOWLEDGE-BASED SYSTEMS
- Volume
- 176
- Start Page
- 110
- End Page
- 121
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/147496
- DOI
- 10.1016/j.knosys.2019.03.026
- ISSN
- 0950-7051
- Abstract
- Existing top-N recommendation models can be classified according to two criteria: the way of optimization and the type of data used. In terms of optimization, a model either minimizes the mean squared error (MSE) of rating predictions, known as pointwise learning, or maximizes the likelihood of pairwise preferences between more-preferred and less-preferred items (e.g., rated vs. unrated items), known as pairwise learning. In terms of data type, models use either explicit feedback or implicit feedback. Most existing models adopt one optimization method with one form of feedback. We believe, however, that pointwise and pairwise learning are complementary, as are explicit and implicit feedback; thus, employing both optimization methods and both forms of data together would bring a synergy effect in recommendation. Along this line, we propose a novel, unified recommendation framework based on deep neural networks, in which pointwise and pairwise learning are employed together while using both users' explicit and implicit feedback. Experimental results on four real-life datasets confirm the effectiveness of our proposed framework over state-of-the-art ones.
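To make the abstract's idea concrete, the combined objective it describes could look like the following minimal sketch: a pointwise MSE term on observed explicit ratings plus a pairwise (BPR-style) term that pushes predicted scores of rated items above those of unrated items. The function name, the dictionary-based inputs, and the weighting parameter `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import math

def combined_loss(pred, ratings, rated, unrated, lam=0.5):
    """Sketch of a unified objective (illustrative, not the paper's exact loss):
    pointwise MSE on explicit ratings + pairwise BPR-style ranking term."""
    # Pointwise part: MSE between predicted and observed ratings
    # (explicit feedback), averaged over rated items.
    mse = sum((pred[i] - ratings[i]) ** 2 for i in rated) / len(rated)
    # Pairwise part: -log sigmoid(score_rated - score_unrated),
    # averaged over all (rated, unrated) item pairs (implicit feedback).
    pairs = [(i, j) for i in rated for j in unrated]
    bpr = -sum(math.log(1.0 / (1.0 + math.exp(-(pred[i] - pred[j]))))
               for i, j in pairs) / len(pairs)
    # lam (assumed hyperparameter) balances the two learning signals.
    return mse + lam * bpr
```

In this sketch, a model whose scores separate rated from unrated items more widely incurs a lower pairwise penalty, while the MSE term keeps predictions close to the observed ratings.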
- Appears in
Collections - College of Engineering, Seoul > School of Computer Software, Seoul > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.