Generative Neural Networks for Anomaly Detection in Crowded Scenes
- Authors
- Wang, Tian; Qiao, Meina; Lin, Zhiwei; Li, Ce; Snoussi, Hichem; Liu, Zhe; Choi, Chang
- Issue Date
- May-2019
- Publisher
- IEEE - Institute of Electrical and Electronics Engineers Inc.
- Keywords
- Spatio-temporal; anomaly detection; variational autoencoder; loss function
- Citation
- IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, v.14, no.5, pp.1390 - 1399
- Journal Title
- IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
- Volume
- 14
- Number
- 5
- Start Page
- 1390
- End Page
- 1399
- URI
- https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/78575
- DOI
- 10.1109/TIFS.2018.2878538
- ISSN
- 1556-6013
- Abstract
- Security surveillance is critical to social harmony and people's peaceful lives, and it has a great impact on strengthening social stability and safeguarding life. Detecting anomalies in video surveillance in a timely, effective and efficient manner remains challenging. This paper proposes a new approach, called S²-VAE, for anomaly detection from video data. The S²-VAE consists of two proposed neural networks: a Stacked Fully connected Variational AutoEncoder (S-F-VAE) and a Skip Convolutional VAE (S-C-VAE). The S-F-VAE is a shallow generative network that fits the distribution of the actual data with a Gaussian-mixture-like model. The S-C-VAE, the key component of S²-VAE, is a deep generative network that takes advantage of CNNs, VAEs and skip connections. Both S-F-VAE and S-C-VAE are efficient and effective generative networks, and they achieve better performance in detecting both local and global abnormal events. The proposed S²-VAE is evaluated on four public datasets. The experimental results show that S²-VAE outperforms the state-of-the-art algorithms. The code is available publicly at https://github.com/tianwangbuaa/.
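- Illustration: the following is a minimal sketch of a skip-connection convolutional VAE in the spirit of the S-C-VAE described above. It is not the authors' implementation (that code is at https://github.com/tianwangbuaa/); the layer sizes, latent dimension, 32x32 input resolution, and U-Net-style placement of the skip connections are all illustrative assumptions.

```python
# Sketch of a skip-connection convolutional VAE (S-C-VAE-like).
# NOT the paper's implementation; architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipConvVAE(nn.Module):
    def __init__(self, in_channels=1, latent_dim=64):
        super().__init__()
        # Encoder: two strided conv blocks (assumed sizes, 32x32 input)
        self.enc1 = nn.Conv2d(in_channels, 32, 4, stride=2, padding=1)  # 32 -> 16
        self.enc2 = nn.Conv2d(32, 64, 4, stride=2, padding=1)           # 16 -> 8
        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)
        # Decoder mirrors the encoder
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 * 8)
        self.dec1 = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)  # 8 -> 16
        self.dec2 = nn.ConvTranspose2d(32, in_channels, 4, stride=2, padding=1)

    def reparameterize(self, mu, logvar):
        # Standard VAE reparameterization trick: z = mu + sigma * eps
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h1 = F.relu(self.enc1(x))   # skip source 1: (32, 16, 16)
        h2 = F.relu(self.enc2(h1))  # skip source 2: (64, 8, 8)
        flat = h2.flatten(1)
        mu, logvar = self.fc_mu(flat), self.fc_logvar(flat)
        z = self.reparameterize(mu, logvar)
        d = self.fc_dec(z).view(-1, 64, 8, 8)
        # Encoder-to-decoder skip connections (assumed additive, U-Net style)
        d = F.relu(self.dec1(d + h2))
        recon = torch.sigmoid(self.dec2(d + h1))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term + KL divergence; x assumed scaled to [0, 1].
    # At test time, a frame with high reconstruction error under the model
    # trained on normal data can be flagged as anomalous.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```

The additive skips let the decoder reuse low-level encoder features, so reconstructions of normal frames stay sharp while genuinely anomalous content still reconstructs poorly; the exact skip placement and fusion (addition vs. concatenation) here is a design assumption, not taken from the paper.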
- Files in This Item
- There are no files associated with this item.
- Appears in
Collections - College of IT Convergence > Department of Computer Engineering > 1. Journal Articles