Improved Human-Object Interaction Detection Through On-the-Fly Stacked Generalization
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Geonu | - |
dc.contributor.author | Yun, Kimin | - |
dc.contributor.author | Cho, Jungchan | - |
dc.date.available | 2021-03-15T01:40:33Z | - |
dc.date.created | 2021-03-15 | - |
dc.date.issued | 2021-02 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/80408 | - |
dc.description.abstract | Human-object interaction (HOI) detection, which finds the relationships between humans and objects, is an important research area, but current HOI detection performance is unsatisfactory. One of the main problems is that CNN-based HOI detection algorithms fail to predict correct outputs for unseen test data when only a limited number of training examples are available. Herein, we propose a novel framework for HOI detection called the on-the-fly stacked generalization deep neural network (OSGNet). OSGNet consists of three main components: (1) feature extraction modules, (2) HOI relationship detection networks, and (3) a meta-learner that combines the outputs of the sub-models, where components (1) and (2) together constitute the sub-models. Any task-based feature extraction modules, such as classification or human pose estimation modules, can be used as sub-models. To achieve on-the-fly stacked generalization, the sub-models and the meta-learner are trained simultaneously: the sub-models are trained to provide complementary information, and the meta-learner improves generalization performance on unseen test data. Extensive experiments demonstrate that the proposed method achieves state-of-the-art accuracy, particularly on rare classes. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.relation.isPartOf | IEEE ACCESS | - |
dc.title | Improved Human-Object Interaction Detection Through On-the-Fly Stacked Generalization | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000626306900001 | - |
dc.identifier.doi | 10.1109/ACCESS.2021.3061208 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.9, pp.34251 - 34263 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85101760318 | - |
dc.citation.endPage | 34263 | - |
dc.citation.startPage | 34251 | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 9 | - |
dc.contributor.affiliatedAuthor | Lee, Geonu | - |
dc.contributor.affiliatedAuthor | Cho, Jungchan | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Pose estimation | - |
dc.subject.keywordAuthor | Neural networks | - |
dc.subject.keywordAuthor | Visualization | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Stacking | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | human-object interaction | - |
dc.subject.keywordAuthor | human pose estimation | - |
dc.subject.keywordAuthor | action recognition | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
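
The abstract describes on-the-fly stacked generalization: sub-models and a meta-learner trained jointly, rather than in the separate stages of classical stacking. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation; the names (`OSGNetSketch`, `SubModel`), the feature dimensions, the two-branch setup, and the unweighted sum of losses are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SubModel(nn.Module):
    """Stand-in for one feature-extraction + HOI relationship branch."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

class OSGNetSketch(nn.Module):
    """Two sub-models plus a meta-learner over their concatenated logits."""
    def __init__(self, in_dim=256, num_classes=600):
        super().__init__()
        self.branch_a = SubModel(in_dim, num_classes)  # e.g., appearance branch
        self.branch_b = SubModel(in_dim, num_classes)  # e.g., pose branch
        self.meta = nn.Linear(2 * num_classes, num_classes)  # meta-learner

    def forward(self, x):
        a, b = self.branch_a(x), self.branch_b(x)
        fused = self.meta(torch.cat([a, b], dim=-1))
        return a, b, fused

model = OSGNetSketch()
criterion = nn.BCEWithLogitsLoss()  # HOI labels are typically multi-label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(8, 256)                     # dummy features for 8 human-object pairs
y = torch.randint(0, 2, (8, 600)).float()   # dummy multi-label targets

a, b, fused = model(x)
# Sub-models and meta-learner share one loss and one optimizer step,
# i.e., stacking happens "on the fly" within a single training run.
loss = criterion(a, y) + criterion(b, y) + criterion(fused, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The design point mirrored here is that a single backward pass updates both the sub-models and the meta-learner, whereas classical stacked generalization first fixes the trained sub-models and then fits the meta-learner on their held-out predictions.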