Text Detection and Classification from Low Quality Natural Images
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yasmeen, Ujala | - |
dc.contributor.author | Shah, Jamal Hussain | - |
dc.contributor.author | Khan, Muhammad Attique | - |
dc.contributor.author | Ansari, Ghulam Jillani | - |
dc.contributor.author | Rehman, Saeed Ur | - |
dc.contributor.author | Sharif, Muhammad | - |
dc.contributor.author | Kadry, Seifedine | - |
dc.contributor.author | Nam, Yunyoung | - |
dc.date.accessioned | 2021-08-11T08:43:42Z | - |
dc.date.available | 2021-08-11T08:43:42Z | - |
dc.date.issued | 2020 | - |
dc.identifier.issn | 1079-8587 | - |
dc.identifier.issn | 2326-005X | - |
dc.identifier.uri | https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/3676 | - |
dc.description.abstract | Detection of textual data in scene text images is a challenging problem in the field of computer graphics and visualization. The challenge is even greater when edge intelligent devices are involved in the process. Low-quality images, affected by blur, low resolution, and poor contrast, make text detection and classification more difficult; these demanding conditions are therefore the focus of this study. The proposed technique comprises three main contributions. (a) After synthetic blurring, the blurred image is preprocessed, and a deblurring process is applied to recover the image. (b) Subsequently, the standard maximally stable extremal regions (MSER) technique is applied to localize and detect text. K-Means clustering is then applied to partition the query image into three clusters, separating foreground from background and supporting character-level grouping. (c) Finally, the segmented text is classified into textual and non-textual regions using a novel convolutional neural network (CNN) framework, which serves to suppress false positives. The proposed technique is evaluated on three mainstream datasets: SVT, IIIT5K, and ICDAR 2003. The achieved classification accuracies are 90.3% on SVT, 95.8% on IIIT5K, and 94.0% on ICDAR 2003, demonstrating the effectiveness of the proposed methodology for model learning. Finally, the proposed methodology is compared with previous benchmark text-detection techniques to validate its contribution. | - |
dc.format.extent | 16 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | AutoSoft Press | - |
dc.title | Text Detection and Classification from Low Quality Natural Images | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.32604/iasc.2020.012775 | - |
dc.identifier.scopusid | 2-s2.0-85101362514 | - |
dc.identifier.wosid | 000618601400005 | - |
dc.identifier.bibliographicCitation | Intelligent Automation and Soft Computing, v.26, no.6, pp 1251 - 1266 | - |
dc.citation.title | Intelligent Automation and Soft Computing | - |
dc.citation.volume | 26 | - |
dc.citation.number | 6 | - |
dc.citation.startPage | 1251 | - |
dc.citation.endPage | 1266 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Automation & Control Systems | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Automation & Control Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.subject.keywordPlus | NEURAL-NETWORK | - |
dc.subject.keywordPlus | RECOGNITION | - |
dc.subject.keywordPlus | COMPETITION | - |
dc.subject.keywordAuthor | Feature points | - |
dc.subject.keywordAuthor | K-means | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | blur image | - |
dc.subject.keywordAuthor | color spaces | - |
dc.subject.keywordAuthor | classification | - |
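The abstract's step (b) partitions the query image into three clusters with K-Means to separate foreground text from background. A minimal sketch of that clustering idea follows, assuming grayscale pixel intensities as the only feature and a simple Lloyd's-algorithm implementation; the paper's actual feature set, MSER stage, and initialization are not reproduced here.

```python
import numpy as np

def kmeans_intensity(pixels, k=3, iters=20):
    """Cluster 1-D pixel intensities into k groups (Lloyd's algorithm).

    Deterministic initialization from min/mean/max; the paper's own
    initialization scheme is not specified in the abstract.
    """
    centers = np.array([pixels.min(), pixels.mean(), pixels.max()], dtype=float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean intensity of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, np.sort(centers)

# Toy "image": dark text strokes, mid-tone regions, bright background.
img = np.concatenate([
    np.full(50, 20.0),    # dark strokes (candidate foreground)
    np.full(30, 120.0),   # mid-tone
    np.full(120, 230.0),  # bright background
])
labels, centers = kmeans_intensity(img, k=3)
```

On this toy input the three recovered centers sit at the three intensity plateaus, so thresholding on cluster membership cleanly separates the dark (text-like) pixels from the background, which is the role this step plays ahead of character-level grouping.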
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.