Detailed Information

Cited 2 times in Web of Science · Cited 2 times in Scopus

Optimal prototype selection for speech emotion recognition using fuzzy k-important nearest neighbour

Full metadata record
dc.contributor.author: Zhang, Zhen Xing
dc.contributor.author: Lim, Joon Shik
dc.contributor.author: Jiang, Zhao Cai
dc.contributor.author: Zhou, Chun Jie
dc.contributor.author: Li, Shao Jing
dc.date.available: 2020-02-28T06:44:19Z
dc.date.created: 2020-02-06
dc.date.issued: 2016
dc.identifier.issn: 1754-3916
dc.identifier.uri: https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/9764
dc.description.abstract: Speech emotion recognition has been a popular topic in affective computing, and recognition accuracy depends on selecting the optimal prototype. In this paper, a new 2-D emotional speech recognition model based on a fuzzy k-important nearest neighbour (FKINN) and a neuro-fuzzy network is described. The FKINN algorithm introduces an important nearest neighbour selection rule. The neuro-fuzzy network applies a bounded sum of weighted fuzzy membership functions (BSWFM). During training, the BSWFM computes the Takagi-Sugeno defuzzification values for the 2-D visual model. The emotional speech signals used in this work were obtained from the Berlin emotional speech database. The proposed model achieves 83.5% overall classification accuracy with the 2-D emotional speech recognition model. The classification accuracies for anger, happiness, sadness, and neutral were 94.1%, 65.9%, 81.1%, and 87.5%, respectively.
dc.language: 영어 (English)
dc.language.iso: en
dc.publisher: INDERSCIENCE ENTERPRISES LTD
dc.relation.isPartOf: INTERNATIONAL JOURNAL OF COMMUNICATION NETWORKS AND DISTRIBUTED SYSTEMS
dc.subject: CLASSIFICATION
dc.subject: SETS
dc.title: Optimal prototype selection for speech emotion recognition using fuzzy k-important nearest neighbour
dc.type: Article
dc.type.rims: ART
dc.description.journalClass: 1
dc.identifier.wosid: 000386595800001
dc.identifier.doi: 10.1504/IJCNDS.2016.079096
dc.identifier.bibliographicCitation: INTERNATIONAL JOURNAL OF COMMUNICATION NETWORKS AND DISTRIBUTED SYSTEMS, v.17, no.2, pp.103-119
dc.identifier.scopusid: 2-s2.0-85006355293
dc.citation.endPage: 119
dc.citation.startPage: 103
dc.citation.title: INTERNATIONAL JOURNAL OF COMMUNICATION NETWORKS AND DISTRIBUTED SYSTEMS
dc.citation.volume: 17
dc.citation.number: 2
dc.contributor.affiliatedAuthor: Lim, Joon Shik
dc.type.docType: Article
dc.subject.keywordAuthor: speech emotion recognition
dc.subject.keywordAuthor: prototype selection
dc.subject.keywordAuthor: nearest neighbour
dc.subject.keywordPlus: CLASSIFICATION
dc.subject.keywordPlus: SETS
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.description.journalRegisteredClass: scopus
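
The abstract above describes FKINN as a fuzzy k-nearest-neighbour classifier extended with an "important nearest neighbour" selection rule and paired with a BSWFM neuro-fuzzy network. Neither the selection rule nor the BSWFM training is detailed in this record, so the sketch below illustrates only the standard fuzzy k-NN voting step that FKINN builds on. It is a minimal Python illustration; the function name fuzzy_knn_predict, the Euclidean distance, the fuzzifier m, and the four emotion classes (anger, happiness, sadness, neutral) are assumptions for demonstration, not the paper's implementation.

import numpy as np

def fuzzy_knn_predict(X_train, y_train, x_query, k=5, m=2.0, n_classes=4):
    # Euclidean distance from the query to every training prototype
    # (assumption: the paper's features and distance measure are not given here).
    d = np.linalg.norm(X_train - x_query, axis=1)
    nn = np.argsort(d)[:k]                        # indices of the k nearest prototypes
    eps = 1e-12                                   # guard against division by zero
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + eps)  # inverse-distance fuzzy weights
    u = np.zeros(n_classes)                       # class memberships of the query
    for idx, weight in zip(nn, w):
        u[y_train[idx]] += weight                 # weighted vote of each neighbour
    u /= u.sum()
    return int(u.argmax()), u                     # predicted class and memberships

# Usage with stand-in random data: 40 prototypes, 12-D features,
# 4 emotion classes (anger, happiness, sadness, neutral).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))
y = rng.integers(0, 4, size=40)
label, memberships = fuzzy_knn_predict(X, y, rng.normal(size=12))

In the paper, prototype selection would presumably replace the plain nearest-neighbour ranking above with the important nearest neighbour rule; the sketch keeps the standard rule because the record does not describe that selection criterion.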
Files in This Item
There are no files associated with this item.
Appears in Collections
College of IT Convergence (IT융합대학) > Department of Computer Engineering (컴퓨터공학과) > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Lim, Joon Shik
College of IT Convergence (School of Computer Engineering, Computer Engineering Major)
