Detailed Information

Cited 4 times in Web of Science · Cited 4 times in Scopus

Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network

Full metadata record
DC Field: Value
dc.contributor.author: Kang, Junhyuk
dc.contributor.author: Shin, Jieun
dc.contributor.author: Shin, Jaewon
dc.contributor.author: Lee, Daeho
dc.contributor.author: Choi, Ahyoung
dc.date.accessioned: 2022-02-12T01:40:31Z
dc.date.available: 2022-02-12T01:40:31Z
dc.date.created: 2022-01-19
dc.date.issued: 2022-01
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/83473
dc.description.abstract: Studies on deep-learning-based behavioral pattern recognition have recently received considerable attention. However, when data are insufficient and the activity to be identified changes, a robust deep learning model cannot be created. This work contributes a generalized deep learning model that is robust to noise and not dependent on the input signals, extracting features through a dedicated deep learning model for each heterogeneous input signal so that performance is maintained while preprocessing of the input signals is minimized. We propose a hybrid deep learning model that takes heterogeneous sensor data, an acceleration sensor and an image, as inputs. For accelerometer data, we use a convolutional neural network (CNN) and a convolutional block attention module (CBAM), and apply bidirectional long short-term memory and a residual neural network. After evaluating nine behaviors from the Berkeley Multimodal Human Action Database (MHAD), the overall accuracy was 94.8% with a skeleton image and accelerometer data, and 93.1% with a skeleton image, coordinates, and accelerometer data. Furthermore, the accuracy was 93.4% with inverted images and 93.2% with white noise added to the accelerometer data. Testing with data that included inversion and noise indicated that the proposed model is robust, with a performance degradation of approximately 1%. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
dc.language: English
dc.language.iso: en
dc.publisher: MDPI
dc.relation.isPartOf: Sensors
dc.title: Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network
dc.type: Article
dc.type.rims: ART
dc.description.journalClass: 1
dc.identifier.wosid: 000751291200001
dc.identifier.doi: 10.3390/s22010174
dc.identifier.bibliographicCitation: Sensors, v.22, no.1
dc.description.isOpenAccess: N
dc.identifier.scopusid: 2-s2.0-85121686689
dc.citation.title: Sensors
dc.citation.volume: 22
dc.citation.number: 1
dc.contributor.affiliatedAuthor: Kang, Junhyuk
dc.contributor.affiliatedAuthor: Shin, Jieun
dc.contributor.affiliatedAuthor: Shin, Jaewon
dc.contributor.affiliatedAuthor: Lee, Daeho
dc.contributor.affiliatedAuthor: Choi, Ahyoung
dc.type.docType: Article
dc.subject.keywordAuthor: Accelerometer sensors
dc.subject.keywordAuthor: Deep learning
dc.subject.keywordAuthor: Fusion network
dc.subject.keywordAuthor: Human activity recognition
dc.subject.keywordAuthor: Skeleton detection
dc.relation.journalResearchArea: Chemistry
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Instruments & Instrumentation
dc.relation.journalWebOfScienceCategory: Chemistry, Analytical
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Instruments & Instrumentation
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
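The abstract describes a two-branch fusion approach: a CNN with a CBAM attention block extracts features from the accelerometer signal, a separate branch handles the skeleton image, and the extracted features are fused for classification. A minimal NumPy sketch of that late-fusion idea follows; all function names, dimensions, and the simplified attention gate are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(x):
    """Simplified CBAM-style channel attention: gate each channel with a
    sigmoid of its pooled statistics (illustrative, not the paper's module)."""
    # x: (channels, timesteps)
    avg = x.mean(axis=1)                         # global average pool per channel
    mx = x.max(axis=1)                           # global max pool per channel
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))     # shared MLP reduced to identity
    return x * gate[:, None]

def accel_branch(accel):
    """Toy accelerometer branch: attention-weighted temporal averaging."""
    return channel_attention(accel).mean(axis=1)       # (3,) feature vector

def image_branch(img):
    """Toy image branch: pooled intensity statistics standing in for a ResNet."""
    return np.array([img.mean(), img.std(), img.max()])  # (3,) feature vector

def fuse_and_classify(accel, img, w, b):
    """Late fusion: concatenate both branches' features, apply a linear head."""
    feat = np.concatenate([accel_branch(accel), image_branch(img)])  # (6,)
    logits = w @ feat + b
    return int(np.argmax(logits))

accel = rng.standard_normal((3, 50))   # 3-axis accelerometer window
img = rng.random((8, 8))               # tiny stand-in for a skeleton image
w = rng.standard_normal((9, 6))        # 9 activity classes, matching the MHAD evaluation
b = np.zeros(9)
print(fuse_and_classify(accel, img, w, b))
```

The key design point the abstract emphasizes is that each heterogeneous input gets its own feature extractor, so noise or distortion in one modality (inverted images, accelerometer white noise) degrades the fused prediction only slightly.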
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of IT Convergence > Department of Software > 1. Journal Articles
College of Engineering > Department of Mechanical Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Dae Ho
Engineering (School of Mechanical, Smart, and Industrial Engineering, Mechanical Engineering Major)
