Detailed Information

Cited 0 times in Web of Science; cited 9 times in Scopus

Text Detection and Classification from Low Quality Natural Images

Authors
Yasmeen, Ujala; Shah, Jamal Hussain; Khan, Muhammad Attique; Ansari, Ghulam Jillani; Rehman, Saeed Ur; Sharif, Muhammad; Kadry, Seifedine; Nam, Yunyoung
Issue Date
2020
Publisher
AutoSoft Press
Keywords
Feature points; K-means; deep learning; blur image; color spaces; classification
Citation
Intelligent Automation and Soft Computing, v.26, no.6, pp. 1251-1266
Pages
16
Journal Title
Intelligent Automation and Soft Computing
Volume
26
Number
6
Start Page
1251
End Page
1266
URI
https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/3676
DOI
10.32604/iasc.2020.012775
ISSN
1079-8587
2326-005X
Abstract
Detection of textual data in scene text images is a challenging problem in computer graphics and visualization, and it becomes even harder when edge intelligent devices are involved in the process. Low-quality images suffering from blur, low resolution, and poor contrast make text detection and classification more difficult, so this exigent aspect is addressed in this study. The proposed technique comprises three main contributions. (a) After synthetic blurring, the blurred image is preprocessed, and a deblurring process is then applied to recover the image. (b) Subsequently, the standard maximally stable extremal regions (MSER) technique is applied to localize and detect text; K-means is then applied to partition the query image into three clusters, separating foreground from background and supporting character-level grouping. (c) Finally, the segmented text is classified into textual and non-textual regions using a novel convolutional neural network (CNN) framework, whose purpose is to suppress false positives. The proposed technique is evaluated on three mainstream datasets: SVT, IIIT5K, and ICDAR 2003. Classification accuracies of 90.3% on the SVT dataset, 95.8% on the IIIT5K dataset, and 94.0% on the ICDAR 2003 dataset demonstrate that the proposed methodology supports good model learning. Finally, the methodology is compared with previous benchmark text-detection techniques to validate its contribution.
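Step (b) of the abstract clusters the pixels of the query image into three groups with K-means to separate foreground text from background. The sketch below is not taken from the paper; it is a minimal, self-contained NumPy illustration of that clustering idea (the function name `kmeans_pixels` and the deterministic quantile-based initialization are assumptions for the example, not the authors' implementation).

```python
import numpy as np

def kmeans_pixels(image, k=3, iters=20):
    """Cluster pixels of an (H, W, C) image into k groups with plain K-means.

    Returns an (H, W) array of integer cluster labels. Centroids are
    initialized from pixels at evenly spaced intensity quantiles, a simple
    deterministic heuristic chosen here for reproducibility.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    # Deterministic init: sort pixels by total intensity, take k spread-out ones.
    order = np.argsort(pixels.sum(axis=1))
    idx = order[np.linspace(0, len(pixels) - 1, k).astype(int)]
    centroids = pixels[idx].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its members; skip empty clusters.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels.reshape(image.shape[:2])

# Toy usage: a 3x9 image with three flat color bands clusters into 3 labels.
img = np.zeros((3, 9, 3), dtype=np.uint8)
img[:, 3:6] = 128   # middle band: gray
img[:, 6:] = 255    # right band: white
labels = kmeans_pixels(img, k=3)
```

In a full pipeline, the cluster containing the text strokes would then be passed to the MSER-based localization and the CNN classifier described in the abstract.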
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Engineering > Department of Computer Science and Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Nam, Yunyoung
College of Engineering (Department of Computer Science and Engineering)
