D-former: a U-shaped Dilated Transformer for 3D medical image segmentation
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wu, Yixuan | - |
dc.contributor.author | Liao, Kuanlun | - |
dc.contributor.author | Chen, Jintai | - |
dc.contributor.author | Wang, Jinhong | - |
dc.contributor.author | Chen, Danny Z. | - |
dc.contributor.author | Gao, Honghao | - |
dc.contributor.author | Wu, Jian | - |
dc.date.accessioned | 2023-01-29T05:40:06Z | - |
dc.date.available | 2023-01-29T05:40:06Z | - |
dc.date.created | 2022-11-08 | - |
dc.date.issued | 2023-01 | - |
dc.identifier.issn | 0941-0643 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/86799 | - |
dc.description.abstract | Computer-aided medical image segmentation is widely applied in diagnosis and treatment to obtain clinically useful information about the shapes and volumes of target organs and tissues. In the past several years, convolutional neural network (CNN)-based methods (e.g., U-Net) have dominated this area, but they still suffer from inadequate capturing of long-range information. Hence, recent work has presented computer vision Transformer variants for medical image segmentation tasks and obtained promising performance. Such Transformers model long-range dependency by computing pair-wise patch relations. However, they incur prohibitive computational costs, especially on 3D medical images (e.g., CT and MRI). In this paper, we propose a new method called Dilated Transformer, which conducts self-attention alternately in local and global scopes to capture pair-wise patch relations. Inspired by dilated convolution kernels, we conduct the global self-attention in a dilated manner, enlarging receptive fields without increasing the number of patches involved and thus reducing computational costs. Based on this Dilated Transformer design, we construct a U-shaped hierarchical encoder-decoder architecture called D-Former for 3D medical image segmentation. Experiments on the Synapse and ACDC datasets show that our D-Former model, trained from scratch, outperforms various competitive CNN-based and Transformer-based segmentation models at a low computational cost, without a time-consuming pre-training process. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | SPRINGER LONDON LTD | - |
dc.relation.isPartOf | NEURAL COMPUTING & APPLICATIONS | - |
dc.title | D-former: a U-shaped Dilated Transformer for 3D medical image segmentation | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000864595700003 | - |
dc.identifier.doi | 10.1007/s00521-022-07859-1 | - |
dc.identifier.bibliographicCitation | NEURAL COMPUTING & APPLICATIONS, v.35, no.2, pp.1931 - 1944 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85139609457 | - |
dc.citation.endPage | 1944 | - |
dc.citation.startPage | 1931 | - |
dc.citation.title | NEURAL COMPUTING & APPLICATIONS | - |
dc.citation.volume | 35 | - |
dc.citation.number | 2 | - |
dc.contributor.affiliatedAuthor | Gao, Honghao | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Medical image analysis | - |
dc.subject.keywordAuthor | Segmentation | - |
dc.subject.keywordAuthor | Transformer | - |
dc.subject.keywordAuthor | Long-range dependency | - |
dc.subject.keywordAuthor | Position encoding | - |
dc.subject.keywordPlus | NETWORKS | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
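The abstract describes the core idea: global self-attention is computed over patches sampled at a fixed dilation stride, so the receptive field spans the whole volume while each attention group still contains the same number of patches as a local group. The paper applies this to 3D patch grids; the sketch below (with hypothetical helper functions `local_groups` and `dilated_groups`, not from the paper) only illustrates the index-grouping idea on a 1D patch sequence.

```python
def local_groups(n_patches, group_size):
    """Contiguous index groups, as in local self-attention:
    each group covers a small neighboring span of patches."""
    return [list(range(start, start + group_size))
            for start in range(0, n_patches, group_size)]

def dilated_groups(n_patches, group_size):
    """Strided (dilated) index groups, as in the global self-attention
    described in the abstract: each group gathers patches spaced by a
    dilation stride, so its receptive field spans the whole sequence
    while the per-group patch count -- and hence the quadratic
    attention cost per group -- stays unchanged."""
    stride = n_patches // group_size  # dilation rate
    return [list(range(offset, n_patches, stride))
            for offset in range(stride)]

# 16 patches, 4 patches attended per group:
print(local_groups(16, 4))    # [[0, 1, 2, 3], [4, 5, 6, 7], ...]
print(dilated_groups(16, 4))  # [[0, 4, 8, 12], [1, 5, 9, 13], ...]
```

Alternating the two groupings across Transformer blocks, as the abstract states, mixes information locally and globally without ever computing full pair-wise attention over all patches.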