Structured patch model for a unified automatic and interactive segmentation framework
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Sang Hyun | - |
dc.contributor.author | Lee, Soochahn | - |
dc.contributor.author | Yun, Il Dong | - |
dc.contributor.author | Lee, Sang Uk | - |
dc.date.accessioned | 2021-08-11T19:46:05Z | - |
dc.date.available | 2021-08-11T19:46:05Z | - |
dc.date.issued | 2015-08 | - |
dc.identifier.issn | 1361-8415 | - |
dc.identifier.issn | 1361-8423 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/10445 | - |
dc.description.abstract | We present a novel interactive segmentation framework incorporating a priori knowledge learned from training data. The knowledge is learned as a structured patch model (StPM) comprising sets of corresponding local patch priors and their pairwise spatial distribution statistics, which represent the local shape and appearance along the boundary and the global shape structure, respectively. When successive user annotations are given, the StPM is appropriately adjusted in the target image and used together with the annotations to guide the segmentation. The StPM reduces the dependency on the placement and quantity of user annotations with little increase in complexity, since the time-consuming StPM construction is performed offline. Furthermore, a seamless learning system can be established by directly adding the patch priors and the pairwise statistics of segmentation results to the StPM. The proposed method was evaluated on three datasets consisting, respectively, of 20 chest CT, 3D knee MR, and 3D brain MR images. The experimental results demonstrate that within an equal amount of time, the proposed interactive segmentation framework outperforms recent state-of-the-art methods in terms of accuracy, while it requires significantly less computing and editing time to obtain results with comparable accuracy. © 2015 Elsevier B.V. All rights reserved. | - |
dc.format.extent | 16 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Elsevier BV | - |
dc.title | Structured patch model for a unified automatic and interactive segmentation framework | - |
dc.type | Article | - |
dc.publisher.location | Netherlands | - |
dc.identifier.doi | 10.1016/j.media.2015.01.003 | - |
dc.identifier.scopusid | 2-s2.0-84938991055 | - |
dc.identifier.wosid | 000360252700022 | - |
dc.identifier.bibliographicCitation | Medical Image Analysis, v.24, no.1, pp 297 - 312 | - |
dc.citation.title | Medical Image Analysis | - |
dc.citation.volume | 24 | - |
dc.citation.number | 1 | - |
dc.citation.startPage | 297 | - |
dc.citation.endPage | 312 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | sci | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Radiology, Nuclear Medicine & Medical Imaging | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
dc.relation.journalWebOfScienceCategory | Engineering, Biomedical | - |
dc.relation.journalWebOfScienceCategory | Radiology, Nuclear Medicine & Medical Imaging | - |
dc.subject.keywordPlus | MULTI-ATLAS SEGMENTATION | - |
dc.subject.keywordPlus | ACTIVE SHAPE MODELS | - |
dc.subject.keywordPlus | MR-IMAGES | - |
dc.subject.keywordPlus | BRAIN IMAGES | - |
dc.subject.keywordPlus | LABEL FUSION | - |
dc.subject.keywordPlus | RANDOM-WALKS | - |
dc.subject.keywordPlus | REGISTRATION | - |
dc.subject.keywordPlus | HIPPOCAMPUS | - |
dc.subject.keywordPlus | EFFICIENT | - |
dc.subject.keywordAuthor | Structured patch model | - |
dc.subject.keywordAuthor | Interactive segmentation | - |
dc.subject.keywordAuthor | Adaptive prior | - |
dc.subject.keywordAuthor | Markov random field | - |
dc.subject.keywordAuthor | Incremental learning | - |
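The abstract describes the StPM as sets of corresponding local patch priors plus pairwise spatial distribution statistics that encode the global shape structure. As a rough illustration of what such pairwise statistics might look like, the toy sketch below records the mean and covariance of the displacement between every pair of patch centers across training shapes. All names and the data layout here are illustrative assumptions, not the authors' actual formulation or API.

```python
import numpy as np

def pairwise_displacement_stats(patch_centers):
    """Toy pairwise spatial statistics for a set of boundary patches.

    patch_centers: array of shape (n_shapes, n_patches, dim), where each
    training shape contributes the centers of its corresponding patches.
    Returns, for every ordered patch pair (i, j), the mean and covariance
    of the displacement vector center_j - center_i across training shapes.
    """
    n_shapes, n_patches, dim = patch_centers.shape
    mean = np.zeros((n_patches, n_patches, dim))
    cov = np.zeros((n_patches, n_patches, dim, dim))
    for i in range(n_patches):
        for j in range(n_patches):
            # Displacements between patches i and j over all training shapes.
            d = patch_centers[:, j] - patch_centers[:, i]  # (n_shapes, dim)
            mean[i, j] = d.mean(axis=0)
            if n_shapes > 1:
                cov[i, j] = np.cov(d, rowvar=False)
    return mean, cov

# Example: 3 training shapes, each with 2 corresponding patches in 2-D.
centers = np.array([
    [[0.0, 0.0], [1.0, 0.0]],
    [[0.1, 0.0], [1.2, 0.1]],
    [[-0.1, 0.1], [0.9, -0.1]],
])
mean, cov = pairwise_displacement_stats(centers)
```

In an interactive setting, statistics like these could constrain where a patch is searched for once a neighboring patch has been fixed by a user annotation; the abstract's incremental-learning step would then correspond to updating the means and covariances with statistics from new segmentation results.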
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.