Background Subtraction Using an Adaptive Local Median Texture Feature in Illumination Changes Urban Traffic Scenes
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Yunsheng | - |
dc.contributor.author | Zheng, Weibo | - |
dc.contributor.author | Leng, Kaijun | - |
dc.contributor.author | Li, Hao | - |
dc.date.accessioned | 2021-06-16T09:40:46Z | - |
dc.date.available | 2021-06-16T09:40:46Z | - |
dc.date.created | 2021-06-16 | - |
dc.date.issued | 2020-06 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/81331 | - |
dc.description.abstract | Background subtraction is commonly employed for foreground object detection in urban traffic scenes. Most current color- or texture-feature-based background subtraction models are easily contaminated by sudden and gradual illumination variations in urban traffic scenes. To resolve this deficiency, an adaptive local median texture feature is introduced, which derives an adaptive distance threshold from the median information in a predefined local region of a pixel together with Weber's law. In addition, a sample consensus-based model that evolved from the portable visual background extractor is proposed using the adaptive local median texture feature. The foreground is then labeled by comparing the features of the input video frames with the model. Moreover, to adapt to dynamic backgrounds, a random update scheme is used to update the model. Extensive experimental results on the public Change Detection 2014 data set (CDnet2014) and on real-world urban traffic videos demonstrate that our background subtraction method is superior to other state-of-the-art texture-feature-based methods. The qualitative and quantitative results show the encouraging efficiency of the proposed technique in dealing with sudden and gradual illumination variations in real-world urban traffic scenes. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.relation.isPartOf | IEEE ACCESS | - |
dc.title | Background Subtraction Using an Adaptive Local Median Texture Feature in Illumination Changes Urban Traffic Scenes | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000552979900001 | - |
dc.identifier.doi | 10.1109/ACCESS.2020.3009104 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.8, pp.130367 - 130378 | - |
dc.description.isOpenAccess | N | - |
dc.citation.endPage | 130378 | - |
dc.citation.startPage | 130367 | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 8 | - |
dc.contributor.affiliatedAuthor | Zhang, Yunsheng | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Lighting | - |
dc.subject.keywordAuthor | Adaptation models | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Computational modeling | - |
dc.subject.keywordAuthor | Biological system modeling | - |
dc.subject.keywordAuthor | Robustness | - |
dc.subject.keywordAuthor | Background modeling | - |
dc.subject.keywordAuthor | illumination variations | - |
dc.subject.keywordAuthor | local median texture feature | - |
dc.subject.keywordAuthor | urban traffic scenes | - |
dc.subject.keywordPlus | MOVING OBJECT DETECTION | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
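The abstract describes a texture descriptor built from a pixel's local median and a Weber's-law threshold (a just-noticeable-difference proportional to the local intensity), which makes the descriptor insensitive to multiplicative illumination changes. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the function name, the `ksize` and `weber_k` parameters, and the bit-packing scheme are all assumptions made for illustration.

```python
import numpy as np

def adaptive_median_texture(frame, ksize=3, weber_k=0.1):
    """Illustrative sketch (not the paper's code): per-pixel binary texture code.

    For each pixel, take the median of its ksize x ksize neighborhood and set
    an adaptive threshold proportional to that median (Weber's law). Each
    neighbor contributes one bit: 1 if it exceeds the median by more than the
    threshold. The bits are packed into an integer code per pixel.
    """
    frame = np.asarray(frame, dtype=np.float64)
    h, w = frame.shape
    r = ksize // 2
    padded = np.pad(frame, r, mode="edge")
    # All ksize x ksize neighborhoods, shape (h, w, ksize, ksize).
    windows = np.lib.stride_tricks.sliding_window_view(padded, (ksize, ksize))
    med = np.median(windows, axis=(2, 3))        # local median per pixel
    thresh = weber_k * med                       # Weber's-law adaptive threshold
    # Bit per neighbor: exceeds the local median by more than the threshold.
    bits = (windows.reshape(h, w, -1) - med[..., None]) > thresh[..., None]
    weights = 1 << np.arange(ksize * ksize)      # pack bits into an integer
    return (bits * weights).sum(axis=-1)
```

Because both the intensity differences and the threshold scale by the same factor under a global illumination change, the resulting codes are unchanged when the frame is multiplied by a constant, which is the robustness property the abstract claims. A ViBe-style sample-consensus model would then compare each incoming code against a set of stored background codes and apply a random update scheme; that machinery is omitted here.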