Detailed Information


The performance of EEG-based auditory attention decoding according to the speech volume in a dichotic listening task: a preliminary study

Full metadata record
dc.contributor.author: Ha, Jiyeon
dc.contributor.author: Lim, Yoonseob
dc.contributor.author: Chung, Jae Ho
dc.date.accessioned: 2023-07-24T09:53:00Z
dc.date.available: 2023-07-24T09:53:00Z
dc.date.created: 2023-07-04
dc.date.issued: 2022-10
dc.identifier.issn: 2226-7808
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/187514
dc.description.abstract: Auditory attention decoding (AAD) detects the attended speech stream from electroencephalography. Recently, a real-time AAD method using a linear decoder model was introduced, expanding the application of AAD toward auditory brain-computer interfaces. However, in everyday conversation people attend to sounds of varying volumes, which raises the question of whether AAD remains possible across different sound stimuli. The present study investigated the effect of the volume difference between two competing speech streams on online AAD in a dichotic listening paradigm. The most comfortable level (MCL) and the dichotic speech recognition threshold (SRT) were evaluated, and the sound levels yielding speech intelligibility (SI) of 90% and 50% were also calculated. In the online AAD task, the attended speech was presented at four different sound levels (MCL, MCL - 20 dBA, and the levels for SI of 90% and 50%), while the ignored speech was fixed at the MCL. AAD performance did not differ across the four sound-level conditions, and the volume difference was not significantly correlated with individual decoder accuracy. This preliminary study showed that lowering the attended speech volume to the SRT level had no effect on AAD.
dc.language: English
dc.language.iso: en
dc.publisher: International Commission for Acoustics (ICA)
dc.title: The performance of EEG-based auditory attention decoding according to the speech volume in a dichotic listening task: a preliminary study
dc.type: Article
dc.contributor.affiliatedAuthor: Chung, Jae Ho
dc.identifier.scopusid: 2-s2.0-85162291578
dc.identifier.bibliographicCitation: Proceedings of the International Congress on Acoustics
dc.relation.isPartOf: Proceedings of the International Congress on Acoustics
dc.citation.title: Proceedings of the International Congress on Acoustics
dc.type.rims: ART
dc.type.docType: Conference paper
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.subject.keywordPlus: Brain computer interface
dc.subject.keywordPlus: Decoding
dc.subject.keywordPlus: Electrophysiology
dc.subject.keywordPlus: Speech intelligibility
dc.subject.keywordPlus: Speech recognition
dc.subject.keywordPlus: Auditory attention
dc.subject.keywordPlus: Decoding methods
dc.subject.keywordPlus: Dichotic listening
dc.subject.keywordPlus: Dichotic speech recognition threshold
dc.subject.keywordPlus: Most comfortable level
dc.subject.keywordPlus: Online auditory attention decoding
dc.subject.keywordPlus: Performance
dc.subject.keywordPlus: Real-time
dc.subject.keywordPlus: Recognition threshold
dc.subject.keywordPlus: Sound's levels
dc.subject.keywordPlus: Electroencephalography
dc.subject.keywordAuthor: dichotic speech recognition threshold
dc.subject.keywordAuthor: Electroencephalography
dc.subject.keywordAuthor: most comfortable level
dc.subject.keywordAuthor: Online auditory attention decoding
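
A note for readers: the abstract above refers to a real-time AAD method based on a linear decoder model. The record contains no code, but a minimal sketch of the correlation-based stimulus-reconstruction approach commonly used for this kind of AAD is given below, in Python with NumPy. The function names, lag range, and ridge parameter are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of correlation-based linear AAD (stimulus reconstruction).
    # Illustrative only: the lag range and ridge parameter are assumptions,
    # not the implementation used in the paper.
    import numpy as np

    def lagged(eeg, n_lags):
        """Stack time-lagged copies of the EEG (samples x channels)
        into a design matrix (samples x channels * n_lags)."""
        n_samples, n_ch = eeg.shape
        X = np.zeros((n_samples, n_ch * n_lags))
        for lag in range(n_lags):
            X[lag:, lag * n_ch:(lag + 1) * n_ch] = eeg[:n_samples - lag]
        return X

    def train_decoder(eeg, attended_env, n_lags=16, ridge=1e3):
        """Fit a ridge-regression decoder mapping lagged EEG to the
        attended speech envelope: w = (X'X + aI)^(-1) X'y."""
        X = lagged(eeg, n_lags)
        XtX = X.T @ X + ridge * np.eye(X.shape[1])
        return np.linalg.solve(XtX, X.T @ attended_env)

    def decode_attention(eeg, env_a, env_b, w, n_lags=16):
        """Reconstruct an envelope from EEG and attribute attention to
        whichever competing speech envelope correlates more strongly."""
        recon = lagged(eeg, n_lags) @ w
        r_a = np.corrcoef(recon, env_a)[0, 1]
        r_b = np.corrcoef(recon, env_b)[0, 1]
        return "A" if r_a > r_b else "B"

In an online task such as the one described in the abstract, this classification would be applied to short sliding windows of EEG, and the fraction of correctly classified windows gives the per-subject decoder accuracy that the study compares across the four sound-level conditions.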
Files in This Item
There are no files associated with this item.
Appears in Collections
Seoul College of Medicine > Seoul Department of Otolaryngology > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Chung, Jae Ho
COLLEGE OF MEDICINE (DEPARTMENT OF OTOLARYNGOLOGY)
