Detailed Information

Cited 75 times in Web of Science; cited 0 times in Scopus

Generative Neural Networks for Anomaly Detection in Crowded Scenes

Full metadata record
DC Field: Value
dc.contributor.author: Wang, Tian
dc.contributor.author: Qiao, Meina
dc.contributor.author: Lin, Zhiwei
dc.contributor.author: Li, Ce
dc.contributor.author: Snoussi, Hichem
dc.contributor.author: Liu, Zhe
dc.contributor.author: Choi, Chang
dc.date.available: 2020-10-20T06:44:21Z
dc.date.created: 2020-06-10
dc.date.issued: 2019-05
dc.identifier.issn: 1556-6013
dc.identifier.uri: https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/78575
dc.description.abstract: Security surveillance is critical to social harmony and people's peaceful lives, and it has a great impact on strengthening social stability and safeguarding life. Detecting anomalies in video surveillance in a timely, effective and efficient manner remains challenging. This paper proposes a new approach, called S²-VAE, for anomaly detection from video data. The S²-VAE consists of two proposed neural networks: a Stacked Fully Connected Variational AutoEncoder (S-F-VAE) and a Skip Convolutional VAE (S-C-VAE). The S-F-VAE is a shallow generative network that obtains a Gaussian-mixture-like model to fit the distribution of the actual data. The S-C-VAE, as a key component of the S²-VAE, is a deep generative network that takes advantage of CNNs, VAEs and skip connections. Both the S-F-VAE and the S-C-VAE are efficient and effective generative networks, and they achieve better performance in detecting both local and global abnormal events. The proposed S²-VAE is evaluated on four public datasets. The experimental results show that the S²-VAE outperforms state-of-the-art algorithms. The code is publicly available at https://github.com/tianwangbuaa/. (An illustrative code sketch of the skip-connection VAE idea appears after this metadata record.)
dc.language: English
dc.language.iso: en
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.relation.isPartOf: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
dc.title: Generative Neural Networks for Anomaly Detection in Crowded Scenes
dc.type: Article
dc.type.rims: ART
dc.description.journalClass: 1
dc.identifier.wosid: 000457798900005
dc.identifier.doi: 10.1109/TIFS.2018.2878538
dc.identifier.bibliographicCitation: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, v.14, no.5, pp.1390 - 1399
dc.description.isOpenAccess: N
dc.citation.endPage: 1399
dc.citation.startPage: 1390
dc.citation.title: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
dc.citation.volume: 14
dc.citation.number: 5
dc.contributor.affiliatedAuthor: Choi, Chang
dc.type.docType: Article
dc.subject.keywordAuthor: Spatio-temporal
dc.subject.keywordAuthor: anomaly detection
dc.subject.keywordAuthor: variational autoencoder
dc.subject.keywordAuthor: loss function
dc.subject.keywordPlus: ABNORMAL EVENT DETECTION
dc.subject.keywordPlus: LOCALIZATION
dc.subject.keywordPlus: RECOGNITION
dc.subject.keywordPlus: MODEL
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalWebOfScienceCategory: Computer Science, Theory & Methods
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
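
The abstract describes the S-C-VAE as a deep generative network that combines a convolutional VAE with skip connections between encoder and decoder, and detects abnormal events from how well normal-looking video data can be reconstructed. The sketch below is a minimal, illustrative PyTorch version of that general idea only; it is not the authors' released code (see the GitHub link in the abstract), and the layer sizes, the 64x64 grayscale-patch input, and the reconstruction-error scoring rule are assumptions made for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SkipConvVAE(nn.Module):
    """Illustrative convolutional VAE whose decoder reuses encoder features via skip connections."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        # Encoder: 1x64x64 grayscale patch -> feature maps
        self.enc1 = nn.Conv2d(1, 32, 4, stride=2, padding=1)    # -> 32 x 32 x 32
        self.enc2 = nn.Conv2d(32, 64, 4, stride=2, padding=1)   # -> 64 x 16 x 16
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        # Decoder
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec1 = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)  # -> 32 x 32 x 32
        self.dec2 = nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1)   # -> 1 x 64 x 64

    def forward(self, x):
        h1 = F.relu(self.enc1(x))
        h2 = F.relu(self.enc2(h1))
        flat = h2.flatten(1)
        mu, logvar = self.fc_mu(flat), self.fc_logvar(flat)
        # Reparameterization trick: sample z from N(mu, sigma^2)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        d = F.relu(self.fc_dec(z)).view(-1, 64, 16, 16)
        d = F.relu(self.dec1(d + h2))              # skip connection from enc2
        recon = torch.sigmoid(self.dec2(d + h1))   # skip connection from enc1
        return recon, mu, logvar


def vae_loss(recon, x, mu, logvar):
    """Standard VAE objective: reconstruction term + KL divergence (inputs scaled to [0, 1])."""
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


def anomaly_score(model, frames):
    """Per-frame mean squared reconstruction error; large values suggest abnormal events."""
    model.eval()
    with torch.no_grad():
        recon, _, _ = model(frames)
        return ((recon - frames) ** 2).flatten(1).mean(dim=1)

In practice such a model would be trained with vae_loss on patches from normal frames only, and anomaly_score would be thresholded on test frames. The published S²-VAE additionally uses the S-F-VAE to obtain a Gaussian-mixture-like fit of the data distribution, which this sketch does not cover.
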
Files in This Item
There are no files associated with this item.
Appears in Collections
College of IT Convergence > Department of Computer Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Choi, Chang
College of IT Convergence (Department of Computer Engineering, Computer Engineering Major)
