An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hashmi M.F. | - |
dc.contributor.author | Ashish B.K.K. | - |
dc.contributor.author | Keskar A.G. | - |
dc.contributor.author | Bokde N.D. | - |
dc.contributor.author | Yoon J.H. | - |
dc.contributor.author | Geem Z.W. | - |
dc.date.available | 2020-07-23T00:35:30Z | - |
dc.date.created | 2020-06-29 | - |
dc.date.issued | 2020-05 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/68780 | - |
dc.description.abstract | In recent years, advances in deep learning have made it easy to generate highly realistic synthetic face swaps using GANs and other tools, leaving traces that are unclassifiable by the human eye. These are known as 'DeepFakes', and most of them circulate in video format. Such realistic fake videos and images are used to sow confusion and degrade the quality of public discourse on sensitive issues; defamation, political disinformation, blackmail, and many other forms of cyber abuse are foreseeable. This work proposes a microscopic comparison of visual traces across video frames. This temporal-detection pipeline compares minute visual traces on the faces in real and fake frames using a Convolutional Neural Network (CNN) and stores the abnormal features for training. A total of 512 facial landmarks were extracted and compared. Parameters such as eye blinking, lip sync, and eyebrow movement and position are among the main deciding factors for classifying visual data as real or counterfeit. A Recurrent Neural Network (RNN) pipeline learns from these extracted features and then evaluates the visual data. The model was trained on a corpus of real and fake videos collected from multiple websites. The proposed algorithm and network set a new benchmark for detecting visual counterfeits and show how this system can achieve competitive results on fake generated videos or images. © 2013 IEEE. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.relation.isPartOf | IEEE Access | - |
dc.title | An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000546406500017 | - |
dc.identifier.doi | 10.1109/ACCESS.2020.2998330 | - |
dc.identifier.bibliographicCitation | IEEE Access, v.8, pp.101293 - 101308 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85086699417 | - |
dc.citation.endPage | 101308 | - |
dc.citation.startPage | 101293 | - |
dc.citation.title | IEEE Access | - |
dc.citation.volume | 8 | - |
dc.contributor.affiliatedAuthor | Geem Z.W. | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Convolutional neural networks (CNN) | - |
dc.subject.keywordAuthor | DeepFakes | - |
dc.subject.keywordAuthor | Facial landmarks | - |
dc.subject.keywordAuthor | Generative adversarial network (GANs) | - |
dc.subject.keywordAuthor | Recurrent neural network (RNN) | - |
dc.subject.keywordAuthor | Visual counterfeits | - |
dc.subject.keywordPlus | Convolutional neural networks | - |
dc.subject.keywordPlus | Deep learning | - |
dc.subject.keywordPlus | Eye movements | - |
dc.subject.keywordPlus | Pipelines | - |
dc.subject.keywordPlus | Cyber terrorism | - |
dc.subject.keywordPlus | Exploratory analysis | - |
dc.subject.keywordPlus | Eye-blinking | - |
dc.subject.keywordPlus | Facial landmark | - |
dc.subject.keywordPlus | Hybrid architectures | - |
dc.subject.keywordPlus | Recurrent neural network (RNN) | - |
dc.subject.keywordPlus | Temporal detection | - |
dc.subject.keywordPlus | Video format | - |
dc.subject.keywordPlus | Long short-term memory | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
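The abstract describes a hybrid pipeline in which per-frame CNN features (e.g., the 512 facial-landmark features mentioned above) are fed to a recurrent network that classifies a clip as real or fake. The sketch below is a minimal, illustrative reconstruction of that shape only, not the authors' implementation: the LSTM cell is hand-rolled in NumPy, all weights are untrained random placeholders, and the dimensions (30 frames, 512 features, 64 hidden units) are assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell; weights are random placeholders, not trained."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(input_dim + hidden_dim)
        # Stacked gate weights: input, forget, candidate, output.
        self.W = rng.normal(0.0, scale, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i = sigmoid(z[0:H])          # input gate
        f = sigmoid(z[H:2 * H])      # forget gate
        g = np.tanh(z[2 * H:3 * H])  # candidate cell state
        o = sigmoid(z[3 * H:4 * H])  # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def classify_sequence(frame_features, cell, w_out, b_out):
    """Run the LSTM over per-frame CNN features; classify the final state."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for x in frame_features:
        h, c = cell.step(x, h, c)
    return sigmoid(w_out @ h + b_out)  # probability the clip is counterfeit

# Toy usage: 30 frames, each with a 512-d landmark/CNN feature vector
# (random stand-ins for real extracted features).
rng = np.random.default_rng(1)
feats = rng.normal(size=(30, 512))
cell = LSTMCell(512, 64)
w_out = rng.normal(0.0, 0.1, 64)
p_fake = classify_sequence(feats, cell, w_out, 0.0)
```

With trained weights, `p_fake` would be thresholded (e.g., at 0.5) to label the clip; here it only demonstrates the data flow from frame-level features to a sequence-level decision.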