Detailed Information


Occluded Pedestrian-Attribute Recognition for Video Sensors Using Group Sparsity

Full metadata record
DC Field: Value
dc.contributor.author: Lee, Geonu
dc.contributor.author: Yun, Kimin
dc.contributor.author: Cho, Jungchan
dc.date.accessioned: 2022-10-12T06:40:08Z
dc.date.available: 2022-10-12T06:40:08Z
dc.date.created: 2022-09-22
dc.date.issued: 2022-09
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/85652
dc.description.abstract: Pedestrians are often obstructed by other objects or people in real-world vision sensors. These obstacles make pedestrian-attribute recognition (PAR) difficult; hence, occlusion handling for visual sensing is a key issue in PAR. To address this problem, we first formulate the identification of non-occluded frames as temporal attention based on the sparsity of a crowded video; that is, a PAR model is guided not to attend to occluded frames. However, this approach alone cannot capture correlations between attributes when occlusion occurs. For example, "boots" and "shoe color" cannot be recognized simultaneously when the feet are invisible. To address this uncorrelated-attention issue, we propose a novel temporal-attention module based on group sparsity, applied across the attention weights of correlated attributes. Accordingly, physically adjacent pedestrian attributes are grouped, and the attention weights within a group are forced to focus on the same frames. Experimental results indicate that the proposed method achieved F1-scores 1.18% and 6.21% higher than the advanced baseline method on the occlusion samples of the DukeMTMC-VideoReID and MARS video-based PAR datasets, respectively.
dc.language: English
dc.language.iso: en
dc.publisher: MDPI
dc.relation.isPartOf: SENSORS
dc.title: Occluded Pedestrian-Attribute Recognition for Video Sensors Using Group Sparsity
dc.type: Article
dc.type.rims: ART
dc.description.journalClass: 1
dc.identifier.wosid: 000851799600001
dc.identifier.doi: 10.3390/s22176626
dc.identifier.bibliographicCitation: SENSORS, v.22, no.17
dc.description.isOpenAccess: Y
dc.identifier.scopusid: 2-s2.0-85137590077
dc.citation.title: SENSORS
dc.citation.volume: 22
dc.citation.number: 17
dc.contributor.affiliatedAuthor: Lee, Geonu
dc.contributor.affiliatedAuthor: Cho, Jungchan
dc.type.docType: Article
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: group-sparsity loss
dc.subject.keywordAuthor: temporal attention module
dc.subject.keywordAuthor: video-based pedestrian-attribute recognition
dc.relation.journalResearchArea: Chemistry
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Instruments & Instrumentation
dc.relation.journalWebOfScienceCategory: Chemistry, Analytical
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Instruments & Instrumentation
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
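The group-sparsity idea described in the abstract can be illustrated with a minimal NumPy sketch. This is an illustrative assumption, not the paper's implementation: the function name, array shapes, and grouping are hypothetical. An L2,1-style penalty over each attribute group's temporal attention weights is smallest when the grouped attributes concentrate their attention on the same frames.

```python
import numpy as np

def group_sparsity_loss(attn, groups):
    """L2,1-style group-sparsity penalty on temporal attention weights.

    attn:   (num_attributes, num_frames) array of attention weights.
    groups: list of index lists, each grouping physically adjacent
            attributes (e.g. "boots" and "shoe color").

    For each group, take the per-frame L2 norm over the group's rows,
    then sum over frames. The penalty is minimized when attributes in a
    group place their attention mass on the same (few) frames.
    """
    loss = 0.0
    for g in groups:
        # (len(g), num_frames) block -> per-frame L2 norm -> sum over frames
        loss += np.linalg.norm(attn[g, :], axis=0).sum()
    return loss
```

In training, such a penalty would be added to the recognition loss so that grouped attributes share temporal support: attention weights of [[1, 0], [1, 0]] (both attributes on frame 0) incur a lower penalty (sqrt(2)) than [[1, 0], [0, 1]] (split across frames, penalty 2).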
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of IT Convergence > Department of Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Cho, Jung Chan
College of IT Convergence (Department of Software)
