Detailed Information

Cited 1 time in Web of Science · Cited 1 time in Scopus

Joint Representation of Temporal Image Sequences and Object Motion for Video Object Detection

Full metadata record
DC Field / Value
dc.contributor.author: Koh, Junho
dc.contributor.author: Kim, Jaekyum
dc.contributor.author: Shin, Younji
dc.contributor.author: Lee, Byeongwon
dc.contributor.author: Yang, Seungji
dc.contributor.author: Choi, Jun Won
dc.date.accessioned: 2022-07-06T17:44:08Z
dc.date.available: 2022-07-06T17:44:08Z
dc.date.created: 2022-05-04
dc.date.issued: 2021-05
dc.identifier.issn: 1050-4729
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/141863
dc.description.abstract: In this paper, we propose a new video object detection (VoD) method, referred to as temporal feature aggregation and motion-aware VoD (TM-VoD), that produces a joint representation of temporal image sequences and object motion. TM-VoD generates strong spatio-temporal features for VoD by aggregating the temporally redundant information in an image sequence together with the motion context. These features are produced at the feature level in the region proposal stage and at the instance level in the refinement stage. In the region proposal stage, visual features are temporally fused with appropriate weights at the pixel level via a gated attention model. Furthermore, pixel-level motion features are obtained by capturing the changes between adjacent visual feature maps. In the refinement stage, the visual features are aligned and aggregated at the instance level. We propose a novel feature alignment method, which uses the initial region proposals as anchors to predict the box coordinates for all video frames. Moreover, the instance-level motion features are obtained by applying region of interest (RoI) pooling to the pixel-level motion features and by encoding the sequential changes in the box coordinates. Finally, all these instance-level features are concatenated to produce a joint representation of the objects. Experiments on the ImageNet VID dataset demonstrate that the proposed method significantly outperforms existing VoD methods and achieves performance comparable to that of state-of-the-art VoD methods.
dc.language: English
dc.language.iso: en
dc.publisher: IEEE
dc.title: Joint Representation of Temporal Image Sequences and Object Motion for Video Object Detection
dc.type: Article
dc.contributor.affiliatedAuthor: Choi, Jun Won
dc.identifier.doi: 10.1109/ICRA48506.2021.9561778
dc.identifier.scopusid: 2-s2.0-85123767748
dc.identifier.wosid: 000771405405004
dc.identifier.bibliographicCitation: 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), v.2021-May, pp.13370 - 13376
dc.relation.isPartOf: 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021)
dc.citation.title: 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021)
dc.citation.volume: 2021-May
dc.citation.startPage: 13370
dc.citation.endPage: 13376
dc.type.rims: ART
dc.type.docType: Proceedings Paper
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Automation & Control Systems
dc.relation.journalResearchArea: Robotics
dc.relation.journalWebOfScienceCategory: Automation & Control Systems
dc.relation.journalWebOfScienceCategory: Robotics
dc.identifier.url: https://ieeexplore.ieee.org/document/9561778
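To make the abstract's region-proposal-stage mechanism concrete, the following is a minimal NumPy sketch of pixel-level gated temporal fusion and adjacent-frame motion features. It is only an illustration of the idea under stated assumptions: the function names, tensor shapes, and the softmax-over-time gating are assumptions, not the authors' TM-VoD implementation.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_temporal_fusion(feature_maps, gate_logits):
    """Fuse per-frame feature maps with pixel-level gating weights.

    feature_maps: (T, C, H, W) visual features for T frames.
    gate_logits:  (T, 1, H, W) unnormalized per-pixel gate scores
                  (assumed here; TM-VoD learns its gates with a network).
    Returns a fused (C, H, W) map: softmax over time, then weighted sum.
    """
    weights = softmax(gate_logits, axis=0)       # (T, 1, H, W), sums to 1 over T
    return (weights * feature_maps).sum(axis=0)  # broadcast over C, reduce over T

def pixel_motion_features(feature_maps):
    """Capture changes between adjacent visual feature maps: (T-1, C, H, W)."""
    return feature_maps[1:] - feature_maps[:-1]

# Toy usage with random features standing in for backbone outputs.
T, C, H, W = 4, 8, 16, 16
rng = np.random.default_rng(0)
feats = rng.normal(size=(T, C, H, W))
gates = rng.normal(size=(T, 1, H, W))
fused = gated_temporal_fusion(feats, gates)
motion = pixel_motion_features(feats)
print(fused.shape, motion.shape)  # (8, 16, 16) (3, 8, 16, 16)
```

The gate weights act as a per-pixel convex combination over the T frames, which matches the abstract's "temporally fused with appropriate weights at the pixel level"; the frame-difference tensor is the simplest reading of "capturing the changes between adjacent visual feature maps."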
Appears in
Collections
Seoul College of Engineering > Seoul Major in Electrical Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Choi, Jun Won
COLLEGE OF ENGINEERING (MAJOR IN ELECTRICAL ENGINEERING)
