Detailed Information


Two-stream small-scale pedestrian detection network with feature aggregation for drone-view videos

Authors
Xie, Han; Shin, Hyunchul
Issue Date
Jul-2021
Publisher
Kluwer Academic Publishers
Keywords
Pedestrian detection; Feature aggregation; Drone vision; Neural network; Deep learning
Citation
Multidimensional Systems and Signal Processing, v.32, no.3, pp. 897-913
Pages
17
Indexed
SCIE
SCOPUS
Journal Title
Multidimensional Systems and Signal Processing
Volume
32
Number
3
Start Page
897
End Page
913
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/111271
DOI
10.1007/s11045-021-00764-1
ISSN
0923-6082 (print)
1573-0824 (electronic)
Abstract
Detecting small-scale pedestrians in aerial images is a challenging task that can be difficult even for humans. Single-image methods cannot achieve robust performance because small instances provide poor visual cues, whereas multiple frames can supply additional information for such difficult cases. We therefore design a novel video-based pedestrian detection method with a two-stream network pipeline that fully utilizes the temporal and contextual information of a video. An aggregated feature map is proposed to absorb spatial and temporal information with the help of spatial and temporal sub-networks. To better capture motion information, a more refined flow network (SPyNet) is adopted instead of a simple FlowNet. In the spatial-stream sub-network, we modify the backbone structure by increasing the feature-map resolution with a relatively larger receptive field to make it suitable for small-scale detection. Experimental results on drone video datasets demonstrate that our approach improves detection accuracy for small-scale instances and reduces false positive detections. By exploiting temporal information and aggregating the feature maps, our two-stream method improves detection performance by 8.48% in mean Average Precision (mAP) over the basic single-stream R-FCN method, and it outperforms the state-of-the-art method by 3.09% on the Okutama Human-action dataset.
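The abstract outlines the core architectural idea: a spatial stream whose backbone keeps the feature-map resolution high while enlarging the receptive field, a temporal stream driven by optical flow (SPyNet in the paper), and an aggregated feature map that fuses the two before detection. Below is a minimal PyTorch sketch of that pipeline. The module names, layer configurations, and the learnable fusion weight alpha are illustrative assumptions, not the authors' implementation, since the paper's exact architecture is not reproduced here.

```python
# Minimal sketch of the two-stream feature-aggregation idea from the abstract.
# All module/parameter names are hypothetical; the paper's actual layers differ.
import torch
import torch.nn as nn

class SpatialStream(nn.Module):
    """Backbone-style stream: a dilated convolution enlarges the receptive
    field while keeping the feature map relatively high-resolution, which
    suits small-scale targets."""
    def __init__(self, in_ch=3, out_ch=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            # Dilation instead of further striding: larger receptive field,
            # same spatial resolution.
            nn.Conv2d(128, out_ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)

class TemporalStream(nn.Module):
    """Encodes motion cues. In the paper the flow comes from SPyNet; here a
    precomputed 2-channel flow field is assumed as input."""
    def __init__(self, out_ch=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, flow):
        return self.features(flow)

class TwoStreamAggregator(nn.Module):
    """Fuses the spatial and temporal feature maps into one aggregated map
    that a detection head (e.g., an R-FCN-style head) would consume."""
    def __init__(self, ch=256):
        super().__init__()
        self.spatial = SpatialStream(out_ch=ch)
        self.temporal = TemporalStream(out_ch=ch)
        # Learnable per-channel fusion weights: one plausible way to realize
        # the aggregation described in the abstract (an assumption).
        self.alpha = nn.Parameter(torch.full((ch, 1, 1), 0.5))

    def forward(self, frame, flow):
        fs = self.spatial(frame)    # spatial/contextual features
        ft = self.temporal(flow)    # motion features
        return self.alpha * fs + (1 - self.alpha) * ft  # aggregated map

if __name__ == "__main__":
    model = TwoStreamAggregator()
    frame = torch.randn(1, 3, 256, 256)  # one video frame
    flow = torch.randn(1, 2, 256, 256)   # flow between adjacent frames
    print(model(frame, flow).shape)      # torch.Size([1, 256, 64, 64])
```

A detection head would then operate on the aggregated map; the learnable per-channel weight shown above is just one simple fusion choice among several the two-stream design admits.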
Appears in Collections
COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles
