Detailed Information

Cited 0 times in Web of Science; cited 1 time in Scopus

Attention-based neural network for end-to-end music separation

Full metadata record
DC Field: Value
dc.contributor.author: Wang, Jing
dc.contributor.author: Liu, Hanyue
dc.contributor.author: Ying, Haorong
dc.contributor.author: Qiu, Chuhan
dc.contributor.author: Li, Jingxin
dc.contributor.author: Anwar, Muhammad Shahid
dc.date.accessioned: 2023-03-22T05:40:08Z
dc.date.available: 2023-03-22T05:40:08Z
dc.date.created: 2023-02-14
dc.date.issued: 2023-06
dc.identifier.issn: 2468-6557
dc.identifier.uri: https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/87197
dc.description.abstract: End-to-end separation algorithms, despite their superior performance in speech separation, have not yet been used effectively for music separation. Moreover, since music signals are usually dual-channel data with a high sampling rate, how to model long-sequence data and make rational use of the information shared between channels is also an urgent problem. To address these problems, the performance of the end-to-end music separation algorithm is enhanced by improving the network structure. Our main contributions include the following: (1) A more reasonable densely connected U-Net is designed to capture long-term characteristics of the music, such as the main melody and tone. (2) On this basis, multi-head attention and a dual-path transformer are introduced in the separation module. Channel attention units are applied recursively to the feature map of each network layer, enabling the network to perform long-sequence separation. Experimental results show that, after the introduction of channel attention, the proposed algorithm achieves a stable improvement over the baseline system. On the MUSDB18 dataset, the average score of the separated audio exceeds that of the current best-performing music separation algorithm based on the time-frequency (T-F) domain. (An illustrative channel-attention sketch follows this metadata record.)
dc.language: English
dc.language.iso: en
dc.publisher: WILEY
dc.relation.isPartOf: CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY
dc.title: Attention-based neural network for end-to-end music separation
dc.type: Article
dc.type.rims: ART
dc.description.journalClass: 1
dc.identifier.wosid: 000914034200001
dc.identifier.doi: 10.1049/cit2.12163
dc.identifier.bibliographicCitation: CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, v.8, no.2, pp.355 - 363
dc.description.isOpenAccess: Y
dc.identifier.scopusid: 2-s2.0-85147000869
dc.citation.endPage: 363
dc.citation.startPage: 355
dc.citation.title: CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY
dc.citation.volume: 8
dc.citation.number: 2
dc.contributor.affiliatedAuthor: Anwar, Muhammad Shahid
dc.type.docType: Article
dc.subject.keywordAuthor: channel attention
dc.subject.keywordAuthor: densely connected network
dc.subject.keywordAuthor: end-to-end music separation
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
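
The abstract above names a channel-attention unit applied recursively to the feature map of each network layer. The paper's own implementation is not part of this record, so the following is only a minimal sketch, assuming a squeeze-and-excitation style gate written in PyTorch; the class name ChannelAttention, the reduction factor of 8, and the (batch, channels, time) layout are illustrative assumptions rather than the authors' design.

```python
# Minimal sketch of a channel-attention gate (squeeze-and-excitation style).
# Everything here is an assumption for illustration; the paper's actual layer
# sizes and its placement inside the densely connected U-Net are not shown.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights the channels of a feature map with a gate learned from global statistics."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),                                    # squeeze: average over the time axis
            nn.Conv1d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                               # one weight in (0, 1) per channel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) feature map from one layer of the network
        return x * self.gate(x)                                         # excite: rescale each channel


if __name__ == "__main__":
    features = torch.randn(2, 64, 16000)      # hypothetical feature map for a short music excerpt
    attended = ChannelAttention(channels=64)(features)
    print(attended.shape)                      # torch.Size([2, 64, 16000]); the shape is preserved
```

Applied at every layer, a gate of this kind lets the network emphasise channels that carry long-term cues before the dual-path transformer performs the separation; the exact design used by the authors should be taken from the article itself (DOI 10.1049/cit2.12163).
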
Files in This Item
There are no files associated with this item.
Appears in Collections: ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Anwar, Muhammad Shahid
College of IT Convergence (Department of Software)
