Detailed Information


Facial electromyogram-based facial gesture recognition for hands-free control of an AR/VR environment: optimal gesture set selection and validation of feasibility as an assistive technology

Authors
Kim, Chunghwan; Kim, Chaeyoon; Kim, HyunSub; Kwak, HwyKuen; Lee, WooJin; Im, Chang-Hwan
Issue Date
Aug-2023
Publisher
Springer Nature
Keywords
Facial electromyogram; Facial expression; Virtual reality; Augmented reality; Assistive device
Citation
BIOMEDICAL ENGINEERING LETTERS, v.13, no.SI 3, pp. 465-473
Indexed
SCIE
SCOPUS
KCI
Journal Title
BIOMEDICAL ENGINEERING LETTERS
Volume
13
Number
SI 3
Start Page
465
End Page
473
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/192046
DOI
10.1007/s13534-023-00277-9
ISSN
2093-9868
Abstract
The rapid expansion of virtual reality (VR) and augmented reality (AR) into various applications has increased the demand for hands-free input interfaces when traditional control methods are inapplicable (e.g., for paralyzed individuals who cannot move their hands). The facial electromyogram (fEMG), a bioelectric signal generated by facial muscles, could solve this problem. Because fEMG signals vary with facial gestures, these gestures can be discriminated from the signals and used to generate discrete hands-free control commands. This study implemented an fEMG-based facial gesture recognition system for generating discrete commands to control an AR or VR environment. The fEMG signals around the eyes were recorded under the assumption that the fEMG electrodes would be embedded in the VR head-mounted display (HMD). Sixteen discrete facial gestures were classified using linear discriminant analysis (LDA) with Riemannian geometry features. Because the fEMG electrodes were far from the facial muscles associated with the facial gestures, some similar facial gestures were indistinguishable from each other. Therefore, this study determined the facial gesture combinations that yielded the highest classification accuracy for 3-15 commands. An analysis of the fEMG data acquired from 15 participants showed that the optimal facial gesture combinations increased the accuracy by 4.7 percentage points compared with randomly selected facial gesture combinations. Moreover, this study is the first to investigate the feasibility of a subject-independent facial gesture recognition system that does not require individual user training sessions. Lastly, our online hands-free control system was successfully applied to a media player to demonstrate the applicability of the proposed system.
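
The classification step named in the abstract (spatial covariance features treated with Riemannian geometry, followed by LDA) can be illustrated with a minimal sketch. The snippet below is not the authors' code: pyriemann and scikit-learn are assumed tooling choices, and the synthetic epochs, channel count, window length, and covariance estimator are placeholders for the recorded fEMG described in the paper.

import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Illustrative dimensions only (not from the paper): 16 gestures,
# 20 epochs per gesture, 8 electrodes, 500-sample windows.
n_gestures, epochs_per_gesture, n_channels, n_samples = 16, 20, 8, 500
rng = np.random.default_rng(0)

# Synthetic stand-in for segmented fEMG epochs: (trials, channels, samples).
X = rng.standard_normal((n_gestures * epochs_per_gesture, n_channels, n_samples))
y = np.repeat(np.arange(n_gestures), epochs_per_gesture)  # gesture labels

pipeline = make_pipeline(
    Covariances(estimator="oas"),    # one SPD spatial covariance matrix per epoch
    TangentSpace(metric="riemann"),  # map SPD matrices to the Riemannian tangent space
    LinearDiscriminantAnalysis(),    # LDA on the tangent-space feature vectors
)

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")

With real fEMG epochs in place of the synthetic array, the same pipeline structure would apply; only the segmentation and labeling of the recordings would change.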
Appears in Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Im, Chang Hwan
COLLEGE OF ENGINEERING (Seoul, Biomedical Engineering)
