Supervised group embedding for rumor detection in social media
- Authors
- Liu, Yuwei; Chen, Xingming; Rao, Yanghui; Xie, Haoran; Li, Qing; Zhang, Jun; Zhao, Yingchao; Wang, Fu Lee
- Issue Date
- Jun-2019
- Publisher
- Springer Verlag
- Keywords
- Convolutional Neural Network; Rumor detection; Social media
- Citation
- Web Engineering: 19th International Conference, ICWE 2019, Daejeon, South Korea, June 11–14, 2019, Proceedings, v.11496, pp. 139–153
- Pages
- 15
- Indexed
- SCI; SCOPUS
- Journal Title
- Web Engineering: 19th International Conference, ICWE 2019, Daejeon, South Korea, June 11–14, 2019, Proceedings
- Volume
- 11496
- Start Page
- 139
- End Page
- 153
- URI
- https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/115845
- DOI
- 10.1007/978-3-030-19274-7_11
- Abstract
- To detect rumors in social media automatically, methods based on recurrent neural networks and convolutional neural networks have been proposed. These methods split the stream of posts related to an event into several groups along the timeline and represent each group using unsupervised methods such as paragraph vectors. However, many posts in a group (e.g., retweeted posts) contribute little to rumor detection, which degrades the performance of rumor detection based on unsupervised group embeddings. In this paper, we propose a Supervised Group Embedding based Rumor Detection (SGERD) model that considers both textual and temporal information. In particular, SGERD exploits post-level textual information to generate group embeddings and is able to identify salient posts for further analysis. Experimental results on two real-world datasets demonstrate the effectiveness of the proposed model. © Springer Nature Switzerland AG 2019.
- Appears in Collections
- COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles