Optimal prototype selection for speech emotion recognition using fuzzy k-important nearest neighbour
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Zhen Xing | - |
dc.contributor.author | Lim, Joon Shik | - |
dc.contributor.author | Jiang, Zhao Cai | - |
dc.contributor.author | Zhou, Chun Jie | - |
dc.contributor.author | Li, Shao Jing | - |
dc.date.available | 2020-02-28T06:44:19Z | - |
dc.date.created | 2020-02-06 | - |
dc.date.issued | 2016 | - |
dc.identifier.issn | 1754-3916 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/9764 | - |
dc.description.abstract | Speech emotion recognition has been a popular topic in affective computing, and its accuracy depends on selecting optimal prototypes. In this paper, a new 2-D emotional speech recognition model based on a fuzzy k-important nearest neighbour (FKINN) and a neuro-fuzzy network is described. The FKINN algorithm introduces an important nearest neighbour selection rule. The neuro-fuzzy network applies a bounded sum of weighted fuzzy membership functions (BSWFM). During training, the BSWFM calculates the Takagi-Sugeno defuzzification values for the 2-D visual model. The emotional speech signals used in this work were obtained from the Berlin emotional speech database. The proposed model achieves 83.5% overall classification accuracy with the 2-D emotional speech recognition model; the classification accuracies for anger, happiness, sadness, and neutral speech were 94.1%, 65.9%, 81.1%, and 87.5%, respectively. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | INDERSCIENCE ENTERPRISES LTD | - |
dc.relation.isPartOf | INTERNATIONAL JOURNAL OF COMMUNICATION NETWORKS AND DISTRIBUTED SYSTEMS | - |
dc.subject | CLASSIFICATION | - |
dc.subject | SETS | - |
dc.title | Optimal prototype selection for speech emotion recognition using fuzzy k-important nearest neighbour | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000386595800001 | - |
dc.identifier.doi | 10.1504/IJCNDS.2016.079096 | - |
dc.identifier.bibliographicCitation | INTERNATIONAL JOURNAL OF COMMUNICATION NETWORKS AND DISTRIBUTED SYSTEMS, v.17, no.2, pp.103 - 119 | - |
dc.identifier.scopusid | 2-s2.0-85006355293 | - |
dc.citation.endPage | 119 | - |
dc.citation.startPage | 103 | - |
dc.citation.title | INTERNATIONAL JOURNAL OF COMMUNICATION NETWORKS AND DISTRIBUTED SYSTEMS | - |
dc.citation.volume | 17 | - |
dc.citation.number | 2 | - |
dc.contributor.affiliatedAuthor | Lim, Joon Shik | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | speech emotion recognition | - |
dc.subject.keywordAuthor | prototype selection | - |
dc.subject.keywordAuthor | nearest neighbour | - |
dc.subject.keywordPlus | CLASSIFICATION | - |
dc.subject.keywordPlus | SETS | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.description.journalRegisteredClass | scopus | - |
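The abstract describes FKINN as a variant of the fuzzy k-nearest neighbour classifier with an added "important neighbour" selection rule. The record does not spell out that rule, so the sketch below shows only the classic fuzzy k-NN decision step (Keller et al.) that such a variant builds on: class membership of a query point is a distance-weighted vote over its k nearest training prototypes. All names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2.0, eps=1e-9):
    """Classic fuzzy k-NN decision step (a sketch, not the paper's FKINN).

    Each of the k nearest prototypes votes for its class with weight
    1 / d^(2/(m-1)); memberships are the normalised per-class weight sums.
    """
    d = np.linalg.norm(X_train - x, axis=1)          # distances to prototypes
    nn = np.argsort(d)[:k]                           # indices of k nearest
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + eps)     # fuzzy distance weights
    classes = np.unique(y_train)
    memberships = {c: w[y_train[nn] == c].sum() / w.sum() for c in classes}
    return max(memberships, key=memberships.get), memberships

# Toy usage: two well-separated clusters standing in for emotion prototypes.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
label, mem = fuzzy_knn_predict(X, y, np.array([0.2, 0.2]))
```

The fuzzifier `m` controls how sharply closer prototypes dominate the vote; `m = 2` is the conventional default.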