Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Improved Speech Emotion Recognition Focusing on High-Level Data Representations and Swift Feature Extraction Calculation

Full metadata record
DC Field : Value
dc.contributor.author : Abdusalomov, Akmalbek
dc.contributor.author : Kutlimuratov, Alpamis
dc.contributor.author : Nasimov, Rashid
dc.contributor.author : Whangbo, Taeg Keun
dc.date.accessioned : 2024-03-14T12:31:43Z
dc.date.available : 2024-03-14T12:31:43Z
dc.date.issued : 2023-12
dc.identifier.issn : 1546-2218
dc.identifier.issn : 1546-2226
dc.identifier.uri : https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/90690
dc.description.abstract : The performance of a speech emotion recognition (SER) system is heavily influenced by the efficacy of its feature extraction techniques. The study was designed to advance the field of SER by optimizing feature extraction techniques, specifically through the incorporation of high-resolution Mel-spectrograms and the expedited calculation of Mel Frequency Cepstral Coefficients (MFCC). This initiative aimed to refine the system's accuracy by identifying and mitigating the shortcomings commonly found in current approaches. Ultimately, the primary objective was to elevate both the intricacy and effectiveness of our SER model, with a focus on augmenting its proficiency in the accurate identification of emotions in spoken language. The research employed a dual-strategy approach for feature extraction. Firstly, a rapid computation technique for MFCC was implemented and integrated with a Bi-LSTM layer to optimize the encoding of MFCC features. Secondly, a pretrained ResNet model was utilized in conjunction with feature Stats pooling and dense layers for the effective encoding of Mel-spectrogram attributes. These two sets of features underwent separate processing before being combined in a Convolutional Neural Network (CNN) outfitted with a dense layer, with the aim of enhancing their representational richness. The model was rigorously evaluated using two prominent databases: CMU-MOSEI and RAVDESS. Notable findings include an accuracy rate of 93.2% on the CMU-MOSEI database and 95.3% on the RAVDESS database. Such exceptional performance underscores the efficacy of this innovative approach, which not only meets but also exceeds the accuracy benchmarks established by traditional models in the field of speech emotion recognition.
dc.format.extent : 19
dc.language : English
dc.language.iso : ENG
dc.publisher : TECH SCIENCE PRESS
dc.title : Improved Speech Emotion Recognition Focusing on High-Level Data Representations and Swift Feature Extraction Calculation
dc.type : Article
dc.identifier.wosid : 001156830100013
dc.identifier.doi : 10.32604/cmc.2023.044466
dc.identifier.bibliographicCitation : CMC-COMPUTERS MATERIALS & CONTINUA, v.77, no.3, pp 2915 - 2933
dc.description.isOpenAccess : Y
dc.identifier.scopusid : 2-s2.0-85181038051
dc.citation.endPage : 2933
dc.citation.startPage : 2915
dc.citation.title : CMC-COMPUTERS MATERIALS & CONTINUA
dc.citation.volume : 77
dc.citation.number : 3
dc.type.docType : Article
dc.publisher.location : USA
dc.subject.keywordAuthor : Feature extraction
dc.subject.keywordAuthor : MFCC
dc.subject.keywordAuthor : ResNet
dc.subject.keywordAuthor : speech emotion recognition
dc.relation.journalResearchArea : Computer Science
dc.relation.journalResearchArea : Materials Science
dc.relation.journalWebOfScienceCategory : Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory : Materials Science, Multidisciplinary
dc.description.journalRegisteredClass : scie
dc.description.journalRegisteredClass : scopus
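The abstract describes a dual-branch design: MFCC features encoded by a Bi-LSTM, Mel-spectrogram features encoded by a pretrained ResNet with statistics pooling, and the two encodings fused and classified through dense layers. The following is a minimal PyTorch sketch of that overall structure, not the authors' implementation: all layer sizes and input shapes are assumptions, and a small CNN stands in for the pretrained ResNet so the sketch stays self-contained.

```python
import torch
import torch.nn as nn

class DualBranchSER(nn.Module):
    """Hypothetical sketch of a dual-branch SER model in the spirit of the
    abstract: Bi-LSTM over MFCC frames + CNN with mean/std (stats) pooling
    over the Mel-spectrogram, fused and classified by dense layers."""

    def __init__(self, n_mfcc=40, hidden=64, n_classes=8):
        super().__init__()
        # Branch 1: Bi-LSTM encodes the MFCC time series.
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True, bidirectional=True)
        # Branch 2: small CNN (stand-in for the pretrained ResNet) over the
        # Mel-spectrogram, followed by statistics pooling (mean and std).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        stats_dim = 2 * 16  # mean + std per channel
        # Fusion head: dense layers over the concatenated branch outputs.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden + stats_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, mfcc, mel):
        # mfcc: (batch, time, n_mfcc); mel: (batch, 1, n_mels, time)
        _, (h, _) = self.lstm(mfcc)
        lstm_feat = torch.cat([h[0], h[1]], dim=1)        # (batch, 2*hidden)
        maps = self.cnn(mel).flatten(2)                   # (batch, 16, 64)
        stats = torch.cat([maps.mean(2), maps.std(2)], 1) # stats pooling
        return self.head(torch.cat([lstm_feat, stats], dim=1))

model = DualBranchSER()
logits = model(torch.randn(2, 50, 40), torch.randn(2, 1, 128, 50))
```

The concatenation-then-dense fusion shown here is one common way to combine heterogeneous audio encodings; the paper's exact fusion CNN and layer sizes are not specified in this record.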
Files in This Item
There are no files associated with this item.
Appears in Collections : ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

College of IT Convergence (Department of Software)