Detailed Information

Cited 0 times in Web of Science · Cited 1 time in Scopus

ScarfNet: Multi-scale features with deeply fused and redistributed semantics for enhanced object detection (open access)

Authors
Yoo, Jin Hyeok; Kum, Dongsuk; Choi, Jun Won
Issue Date
Jan-2021
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Proceedings - International Conference on Pattern Recognition, pp.4505 - 4512
Indexed
SCOPUS
Journal Title
Proceedings - International Conference on Pattern Recognition
Start Page
4505
End Page
4512
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/142433
DOI
10.1109/ICPR48806.2021.9412795
ISSN
1051-4651
Abstract
Convolutional neural networks (CNNs) have led us to achieve significant progress in object detection research. To detect objects of various sizes, object detectors often exploit the hierarchy of the multiscale feature maps called feature pyramids, which are readily obtained by the CNN architecture. However, the performance of these object detectors is limited because the bottom-level feature maps, which experience fewer convolutional layers, lack the semantic information needed to capture the characteristics of the small objects. To address such problems, various methods have been proposed to increase the depth for the bottom-level features used for object detection. While most approaches are based on the generation of additional features through the top-down pathway with lateral connections, our approach directly fuses multi-scale feature maps using bidirectional long short-term memory (biLSTM) in an effort to leverage the gating functions and parameter-sharing in generating deeply fused semantics. The resulting semantic information is redistributed to the individual pyramidal feature at each scale through the channel-wise attention model. We integrate our semantic combining and attentive redistribution feature network (ScarfNet) with the baseline object detectors, i.e., Faster R-CNN, single-shot multibox detector (SSD), and RetinaNet. Experimental results show that our method offers a significant performance gain over the baseline detectors and outperforms the competing multiscale fusion methods in the PASCAL VOC and COCO detection benchmarks.
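The two stages described in the abstract — fusing the pyramid levels with a bidirectional recurrence across scales, then redistributing the fused semantics to each level through channel-wise attention — can be illustrated with a deliberately simplified numpy sketch. This is not the authors' implementation: the biLSTM is replaced by a toy exponential-average recurrence in both directions over global-pooled channel descriptors, and all function names (`bidirectional_fuse`, `redistribute`) are hypothetical.

```python
import numpy as np

def global_pool(x):
    # Channel descriptor via global average pooling: (C, H, W) -> (C,)
    return x.mean(axis=(1, 2))

def bidirectional_fuse(pyramid):
    """Toy stand-in for the biLSTM stage: sweep channel descriptors
    bottom-up and top-down with a shared update rule, then sum the
    two passes to obtain one fused semantic vector per scale."""
    descs = [global_pool(f) for f in pyramid]
    fwd, bwd = [], []
    acc = np.zeros_like(descs[0])
    for d in descs:                     # bottom-up pass
        acc = 0.5 * acc + 0.5 * d
        fwd.append(acc)
    acc = np.zeros_like(descs[0])
    for d in reversed(descs):           # top-down pass
        acc = 0.5 * acc + 0.5 * d
        bwd.append(acc)
    bwd.reverse()
    return [f + b for f, b in zip(fwd, bwd)]

def redistribute(pyramid, fused):
    """Channel-wise attention stage: gate each scale's channels with a
    sigmoid of its fused descriptor, broadcast over spatial dims."""
    out = []
    for feat, sem in zip(pyramid, fused):
        gate = 1.0 / (1.0 + np.exp(-sem))        # (C,), values in (0, 1)
        out.append(feat * gate[:, None, None])   # reweight channels
    return out

# Three pyramid levels with 4 channels and halving spatial resolution.
pyramid = [np.ones((4, 8 >> i, 8 >> i)) for i in range(3)]
enhanced = redistribute(pyramid, bidirectional_fuse(pyramid))
```

The output feature maps keep the shape of their inputs, so the enhanced pyramid can be handed to the baseline detector heads (Faster R-CNN, SSD, RetinaNet) unchanged; only the channel weighting differs per scale.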
Appears in Collections
Seoul College of Engineering > Seoul Major in Electrical Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Choi, Jun Won
COLLEGE OF ENGINEERING (MAJOR IN ELECTRICAL ENGINEERING)