Action Recognition Network Using Stacked Short-Term Deep Features and Bidirectional Moving Average
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ha, Jinsol | - |
dc.contributor.author | Shin, Joongchol | - |
dc.contributor.author | Park, Hasil | - |
dc.contributor.author | Paik, Joonki | - |
dc.date.accessioned | 2021-08-13T05:40:14Z | - |
dc.date.available | 2021-08-13T05:40:14Z | - |
dc.date.issued | 2021-06 | - |
dc.identifier.issn | 2076-3417 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/48329 | - |
dc.description.abstract | Action recognition requires the accurate analysis of action elements in the form of a video clip and a properly ordered sequence of those elements. To solve the two sub-problems, it is necessary to learn both spatio-temporal information and the temporal relationship between different action elements. Existing convolutional neural network (CNN)-based action recognition methods have focused on learning only spatial or temporal information without considering the temporal relation between action elements. In this paper, we create short-term pixel-difference images from the input video and feed them to a bidirectional exponential moving average sub-network to analyze the action elements and their temporal relations. The proposed method consists of: (i) generation of RGB and differential images, (ii) extraction of deep feature maps using an image classification sub-network, (iii) weight assignment to the extracted feature maps using a bidirectional exponential moving average sub-network, and (iv) late fusion with a three-dimensional convolutional (C3D) sub-network to improve the accuracy of action recognition. Experimental results show that the proposed method achieves a higher performance level than existing baseline methods. In addition, the proposed action recognition network takes only 0.075 seconds per action class, which enables various high-speed or real-time applications, such as abnormal action classification, human-computer interaction, and intelligent visual surveillance. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI | - |
dc.title | Action Recognition Network Using Stacked Short-Term Deep Features and Bidirectional Moving Average | - |
dc.type | Article | - |
dc.identifier.doi | 10.3390/app11125563 | - |
dc.identifier.bibliographicCitation | APPLIED SCIENCES-BASEL, v.11, no.12 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.wosid | 000666453600001 | - |
dc.identifier.scopusid | 2-s2.0-85108894883 | - |
dc.citation.number | 12 | - |
dc.citation.title | APPLIED SCIENCES-BASEL | - |
dc.citation.volume | 11 | - |
dc.type.docType | Article | - |
dc.publisher.location | Switzerland | - |
dc.subject.keywordAuthor | action recognition | - |
dc.subject.keywordAuthor | three-dimensional convolution (C3D) | - |
dc.subject.keywordAuthor | short-term pixel-difference | - |
dc.subject.keywordAuthor | bidirectional moving average | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Materials Science | - |
dc.relation.journalResearchArea | Physics | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
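The abstract's pipeline rests on two simple operations: short-term pixel-difference images and a bidirectional exponential moving average over per-frame deep features. The sketch below illustrates both in NumPy under stated assumptions; the function names, the smoothing factor `alpha`, and the equal-weight averaging of the two passes are hypothetical choices for illustration, not the paper's exact formulation.

```python
import numpy as np

def short_term_differences(frames):
    """Short-term pixel-difference images: differences of consecutive frames.

    `frames` is a (T, H, W, C) array; returns a (T-1, H, W, C) array
    where entry t is frame[t+1] - frame[t].
    """
    frames = np.asarray(frames, dtype=np.float64)
    return frames[1:] - frames[:-1]

def bidirectional_ema(features, alpha=0.5):
    """Bidirectional exponential moving average over per-frame features.

    Runs a forward and a backward EMA pass over the time axis and averages
    the two, so each frame's weight reflects both past and future context.
    `alpha` is an assumed smoothing factor; the paper's learned weighting
    scheme may differ.
    """
    features = np.asarray(features, dtype=np.float64)
    fwd = np.empty_like(features)
    bwd = np.empty_like(features)
    fwd[0] = features[0]
    for t in range(1, len(features)):           # forward pass
        fwd[t] = alpha * features[t] + (1 - alpha) * fwd[t - 1]
    bwd[-1] = features[-1]
    for t in range(len(features) - 2, -1, -1):  # backward pass
        bwd[t] = alpha * features[t] + (1 - alpha) * bwd[t + 1]
    return 0.5 * (fwd + bwd)
```

In the full method, the smoothed features would then be late-fused with a C3D sub-network's output; that fusion step is omitted here since the record does not specify it beyond the abstract.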