Image-Based Learning to Measure the Space Mean Speed on a Stretch of Road without the Need to Tag Images with Labels
- Authors
- Lee, Jincheol; Roh, Seungbin; Shin, Johyun; Sohn, Keemin
- Issue Date
- Mar-2019
- Publisher
- NLM (Medline)
- Keywords
- space mean speed; convolutional neural network (CNN); cycle-consistent adversarial network (CycleGAN); traffic surveillance; traffic prediction
- Citation
- Sensors (Basel, Switzerland), v.19, no.5
- Journal Title
- Sensors (Basel, Switzerland)
- Volume
- 19
- Number
- 5
- URI
- https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/18519
- DOI
- 10.3390/s19051227
- ISSN
- 1424-8220
- Abstract
- Space mean speed cannot be directly measured in the field, although it is a basic parameter that is used to evaluate traffic conditions. An end-to-end convolutional neural network (CNN) was adopted to measure the space mean speed based solely on two consecutive road images. However, tagging images with labels (i.e., true space mean speeds) by manually positioning and tracking every vehicle on road images is a formidable task. The present study focused on naïve animation images provided by a traffic simulator, because these contain perfect information concerning vehicle movement from which labels can be obtained. The animation images, however, seem far removed from actual photos taken in the field. A cycle-consistent adversarial network (CycleGAN) bridged the reality gap by mapping the animation images into seemingly realistic images that could not be distinguished from real photos. A CNN model trained on the synthesized images was tested on real photos that had been manually labeled. The test performance was comparable to that of state-of-the-art motion-capture technologies. The proposed method showed that deep-learning models to measure the space mean speed can be trained without the need for time-consuming manual annotation.
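For readers unfamiliar with the quantity being estimated: space mean speed over a stretch is conventionally defined as the harmonic mean of individual vehicle speeds (equivalently, stretch length divided by mean travel time), which is why it cannot be read directly from spot-speed sensors. A minimal illustrative sketch (not code from the paper; the function name and sample values are hypothetical):

```python
def space_mean_speed(spot_speeds):
    """Space mean speed as the harmonic mean of spot speeds (same units in/out).

    Equivalent to stretch length divided by the average travel time of the
    observed vehicles.
    """
    if not spot_speeds:
        raise ValueError("need at least one observed speed")
    return len(spot_speeds) / sum(1.0 / v for v in spot_speeds)


speeds = [60.0, 40.0, 30.0]            # hypothetical spot speeds, km/h
sms = space_mean_speed(speeds)         # harmonic mean -> 40.0 km/h
tms = sum(speeds) / len(speeds)        # time mean (arithmetic) -> ~43.3 km/h
```

Note that the space mean speed is always less than or equal to the time mean speed, which is why estimating it requires tracking vehicles over the stretch rather than averaging spot measurements.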
- Appears in Collections
- College of Engineering > ETC > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.