Detailed Information

Cited 0 times in Web of Science; cited 0 times in Scopus

Maximum entropy scaled super pixels segmentation for multi-object detection and scene recognition via deep belief network

Authors
Rafique, Adnan Ahmed; Gochoo, Munkhjargal; Jalal, Ahmad; Kim, Kibum
Issue Date
Aug-2022
Publisher
Springer Nature
Keywords
Bag of features; Deep belief network; Entropy-scaled segmentation; Super-pixels
Citation
Multimedia Tools and Applications, pp. 1–30
Pages
30
Indexed
SCIE
SCOPUS
Journal Title
Multimedia Tools and Applications
Start Page
1
End Page
30
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/111487
DOI
10.1007/s11042-022-13717-y
ISSN
1380-7501
1573-7721
Abstract
Recent advances in vision technologies have impacted multi-object recognition and scene understanding. Such scene-understanding tasks are a demanding part of several technologies, such as augmented reality based scene integration, robotic navigation, autonomous driving, and tourist guide applications. By incorporating visual information into contextually unified segments, super-pixel-based approaches significantly mitigate the clutter that is common in pixel-wise frameworks during scene understanding. Super-pixels allow customized shapes and variable-size patches of connected components to be obtained. Furthermore, the computational time of these segmentation approaches can be significantly decreased due to the reduced number of super-pixel target clusters. Hence, super-pixel-based approaches are widely used in robotics, computer vision, and other intelligent systems. In this paper, we propose a Maximum Entropy scaled Super-Pixels (MEsSP) segmentation method that encapsulates super-pixel segmentation based on an entropy model and utilizes local energy terms to label the pixels. After acquisition and pre-processing, the image is first segmented by two different methods: Fuzzy C-Means (FCM) and MEsSP. Then, dynamic geometrical features, fast Fourier transform (FFT) features, blob extraction, Maximally Stable Extremal Regions (MSER), and KAZE features are extracted from the segmented objects using the bag-of-features approach. Multiple kernel learning is then applied to categorize the objects. Finally, a deep belief network (DBN) assigns the relevant labels to the scenes based on the categorized objects, intersection-over-union scores, and the Dice similarity coefficient. Experimental results on multi-object recognition accuracy, precision, recall, and F1 scores over the PASCAL VOC, Caltech 101, and UIUC Sports datasets show remarkable performance. In addition, the proposed scene recognition method outperforms state-of-the-art (SOTA) methods on these benchmark datasets.
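The paper itself defines the MEsSP energy terms; purely as an illustration of the entropy-scaled idea, the hypothetical sketch below (not the authors' implementation) scores image segments by the Shannon entropy of their intensity histograms — the kind of quantity an entropy-based criterion can use to decide which regions need finer super-pixel refinement.

```python
import numpy as np

def segment_entropy(values, bins=16):
    """Shannon entropy (bits) of the intensity histogram inside one segment."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

# Toy image: left half is flat, right half is random texture.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = rng.random((32, 16))

# Two hand-made "super-pixels" standing in for a real over-segmentation.
labels = np.zeros((32, 32), dtype=int)
labels[:, 16:] = 1

entropies = {k: segment_entropy(img[labels == k]) for k in (0, 1)}
# The textured segment scores higher, so an entropy-scaled criterion
# would refine it further while leaving the flat segment coarse.
assert entropies[1] > entropies[0]
```

The flat segment's histogram collapses into a single bin (entropy 0), while the textured segment spreads across many bins; ranking segments this way is one simple way to allocate segmentation effort where image content is richest.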
Appears in
Collections
COLLEGE OF COMPUTING > SCHOOL OF MEDIA, CULTURE, AND DESIGN TECHNOLOGY > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

