Adaptive FOA region extraction for saliency-based visual attention
| DC Field | Value | Language |
| --- | --- | --- |
dc.contributor.author | Lee, H. | - |
dc.contributor.author | Bae, C. | - |
dc.contributor.author | Lee, J. | - |
dc.contributor.author | Sohn, S. | - |
dc.date.available | 2019-05-29T09:35:36Z | - |
dc.date.issued | 2012-07 | - |
dc.identifier.issn | 2093-4009 | - |
dc.identifier.issn | 2233-940X | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/20929 | - |
dc.description.abstract | This paper describes adaptive extraction of the focus of attention (FOA) region for saliency-based visual attention. The saliency map model identifies the most salient and significant location in the visual scene. The human brain exhibits an inhibition-of-return property, by which the currently attended point is prevented from being attended again. Implementing focus of attention and inhibition of return therefore requires an appropriate mask for the salient region, and a shape-based mask may be more suitable than other masks. In contrast to the existing fixed-size FOA, we propose an adaptive, shape-based FOA region derived from the most salient region of the saliency map. We determine the most salient point by checking every value in the saliency map, expand the neighborhood of that point until the average value of the neighborhood falls below 75% of the most salient point's value, and then find the contour of the neighborhood. The resulting adaptive FOA closely follows the shape of the attended object, making it useful for object recognition and other computer vision tasks. | - |
dc.format.extent | 8 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.title | Adaptive FOA region extraction for saliency-based visual attention | - |
dc.type | Article | - |
dc.identifier.doi | 10.4156/ijipm.vol3.issue3.5 | - |
dc.identifier.bibliographicCitation | International Journal of Information Processing and Management, v.3, no.3, pp 36 - 43 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-84871353894 | - |
dc.citation.endPage | 43 | - |
dc.citation.number | 3 | - |
dc.citation.startPage | 36 | - |
dc.citation.title | International Journal of Information Processing and Management | - |
dc.citation.volume | 3 | - |
dc.identifier.url | http://www.globalcis.org/dl/citation.html?id=IJIPM-116&Search=Adaptive%20FOA%20region%20extraction%20for%20saliency-based%20visual%20attention&op=Title | - |
dc.type.docType | Article | - |
dc.publisher.location | Republic of Korea | - |
dc.subject.keywordAuthor | Exogenous | - |
dc.subject.keywordAuthor | Focus of Attention | - |
dc.subject.keywordAuthor | Saliency | - |
dc.subject.keywordAuthor | Visual attention | - |
dc.description.journalRegisteredClass | scopus | - |
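The extraction procedure described in the abstract (locate the saliency peak, grow its neighborhood while the region's mean saliency stays at or above 75% of the peak, then take the region's contour as the shape-based FOA mask) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name `adaptive_foa_mask`, the greedy 4-connected growth order, and the treatment of rejected pixels are all assumptions made for the sketch.

```python
from collections import deque

import numpy as np


def adaptive_foa_mask(saliency, ratio=0.75):
    """Grow an adaptive, shape-based FOA region from the saliency peak.

    Sketch of the procedure from the abstract: start at the most
    salient pixel and absorb 4-connected neighbours as long as the
    region's mean saliency stays at or above `ratio` (75% in the
    paper) of the peak value. The boundary of the returned boolean
    mask is the shape-based FOA contour.
    """
    h, w = saliency.shape
    peak = np.unravel_index(np.argmax(saliency), saliency.shape)
    threshold = ratio * saliency[peak]

    mask = np.zeros((h, w), dtype=bool)
    mask[peak] = True
    total, count = float(saliency[peak]), 1

    frontier = deque([peak])  # BFS frontier of accepted pixels
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                # Reject a pixel if absorbing it would pull the
                # region mean below 75% of the peak saliency.
                if (total + saliency[ny, nx]) / (count + 1) < threshold:
                    continue
                mask[ny, nx] = True
                total += float(saliency[ny, nx])
                count += 1
                frontier.append((ny, nx))
    return mask
```

On a saliency map with a single smooth peak, the mask adapts to the blob's extent rather than using a fixed-size window; the contour of the mask (e.g. via a contour-tracing routine) then serves as the shape-based inhibition-of-return mask.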