Detailed Information


Multi-View Multi-Modal Head-Gaze Estimation for Advanced Indoor User Interaction

Full metadata record
dc.contributor.author: Kim, Jung-Hwa
dc.contributor.author: Jeong, Jin-Woo
dc.date.accessioned: 2024-02-27T16:31:44Z
dc.date.available: 2024-02-27T16:31:44Z
dc.date.issued: 2022-01
dc.identifier.issn: 1546-2218
dc.identifier.issn: 1546-2226
dc.identifier.uri: https://scholarworks.bwise.kr/kumoh/handle/2020.sw.kumoh/28247
dc.description.abstract: Gaze estimation is one of the most promising technologies for supporting indoor monitoring and interaction systems. However, previous gaze estimation techniques generally work only in a controlled laboratory environment because they require a number of high-resolution eye images. This makes them unsuitable for welfare and healthcare facilities with the following challenging characteristics: 1) users' continuous movements, 2) various lighting conditions, and 3) a limited amount of available data. To address these issues, we introduce a multi-view multi-modal head-gaze estimation system that translates the user's head orientation into the gaze direction. The proposed system captures the user using multiple cameras with depth and infrared modalities to train more robust gaze estimators under the aforementioned conditions. To this end, we implemented a deep learning pipeline that can handle different types and combinations of data. The proposed system was evaluated using data collected from 10 volunteer participants to analyze how the use of single/multiple cameras and modalities affects the performance of head-gaze estimators. Through various experiments, we found that 1) the infrared modality provides more useful features than the depth modality, 2) multi-view multi-modal approaches provide better accuracy than single-view single-modal approaches, and 3) the proposed estimators achieve high inference efficiency suitable for real-time applications.
dc.format.extent: 26
dc.language: English
dc.language.iso: ENG
dc.publisher: TECH SCIENCE PRESS
dc.title: Multi-View Multi-Modal Head-Gaze Estimation for Advanced Indoor User Interaction
dc.type: Article
dc.publisher.location: United States
dc.identifier.doi: 10.32604/cmc.2022.021107
dc.identifier.wosid: 000707334500013
dc.identifier.bibliographicCitation: CMC-COMPUTERS MATERIALS & CONTINUA, v.70, no.3, pp 5107 - 5132
dc.citation.title: CMC-COMPUTERS MATERIALS & CONTINUA
dc.citation.volume: 70
dc.citation.number: 3
dc.citation.startPage: 5107
dc.citation.endPage: 5132
dc.type.docType: Article
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Materials Science
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Materials Science, Multidisciplinary
dc.subject.keywordPlus: NETWORKS
dc.subject.keywordAuthor: Human-computer interaction
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: head-gaze estimation
dc.subject.keywordAuthor: indoor monitoring
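
The abstract above describes a deep learning pipeline that fuses multiple camera views and two sensing modalities (depth and infrared) into a single head-gaze estimator. The record does not include the paper's actual architecture, so the following is only a minimal late-fusion sketch in PyTorch: one hypothetical CNN encoder per (view, modality) stream, with concatenated features regressed to a yaw/pitch gaze direction. All class names, dimensions, and the output parameterization are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the paper's actual network is not described in this
# record. Assumes each (view, modality) stream is a 1-channel image (depth or IR)
# and that head-gaze is regressed as a yaw/pitch pair. All names are hypothetical.
import torch
import torch.nn as nn


class StreamEncoder(nn.Module):
    """Per-view, per-modality CNN encoder for a single 1-channel image stream."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MultiViewMultiModalHeadGaze(nn.Module):
    """Late fusion: one encoder per (view, modality) stream; concatenated
    features are regressed to a 2-D head-gaze direction (yaw, pitch)."""

    def __init__(self, num_streams: int, feat_dim: int = 128):
        super().__init__()
        self.encoders = nn.ModuleList(
            StreamEncoder(feat_dim) for _ in range(num_streams)
        )
        self.head = nn.Sequential(
            nn.Linear(num_streams * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),  # yaw, pitch (assumed output format)
        )

    def forward(self, streams: list[torch.Tensor]) -> torch.Tensor:
        feats = [enc(x) for enc, x in zip(self.encoders, streams)]
        return self.head(torch.cat(feats, dim=1))


# Usage: 2 views x 2 modalities (depth + infrared) = 4 input streams.
model = MultiViewMultiModalHeadGaze(num_streams=4)
batch = [torch.randn(8, 1, 96, 96) for _ in range(4)]
print(model(batch).shape)  # torch.Size([8, 2])
```

Dropping any subset of streams at construction time mimics the paper's single-view/single-modal comparisons, since the fusion head's input width scales with the number of streams.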
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
