Detailed Information


Capacity-aware key partitioning scheme for heterogeneous big data analytic engines

Authors
Hanif, Muhammad; Lee, Choonhwa
Issue Date
Mar-2018
Publisher
Institute of Electrical and Electronics Engineers Inc.
Keywords
Cloud and Distributed Computing; Context-aware Partitioning; Hadoop MapReduce; Heterogeneous Systems
Citation
International Conference on Advanced Communication Technology, ICACT, v.2018-February, pp. 999-1007
Indexed
SCOPUS
Journal Title
International Conference on Advanced Communication Technology, ICACT
Volume
2018-February
Start Page
999
End Page
1007
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/150409
DOI
10.23919/ICACT.2018.8323922
ISSN
1738-9445
Abstract
Big data and cloud computing have been the centre of interest for the past decade. With the growth of data sizes and the diversity of cloud applications, big data analytics has become very popular in both industry and academia. Research communities in industry and academia have never stopped trying to come up with fast, robust, and fault-tolerant analytic engines. MapReduce has become one of the most popular big data analytic engines over the past few years. Hadoop is a standard implementation of the MapReduce framework for running data-intensive applications on clusters of commodity servers. By thoroughly studying the framework, we find that the shuffle phase, the all-to-all input data fetching phase of the reduce task, significantly affects application performance. Hadoop's MapReduce system suffers from variance in both the frequencies of intermediate keys and their distribution among data nodes throughout the cluster. This variance causes network overhead and leads to unfairness in reduce input across the cluster's data nodes. Because of these problems, applications experience performance degradation during the shuffle phase. We develop a novel algorithm that, unlike previous systems, uses each node's capabilities as a heuristic to decide on a better available trade-off between locality and fairness in the system. Compared with Hadoop's default partitioning algorithm and the Leen partitioning algorithm: (a) with 2 million key-value pairs to process, our approach achieves better resource utilization by about 19% and 9% on average, respectively; (b) with 3 million key-value pairs to process, our approach achieves near-optimal resource utilization, improving by about 15% and 7%, respectively.
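The capacity-aware partitioning idea sketched in the abstract can be illustrated against Hadoop's public Partitioner API. The following is a minimal, hypothetical example, not the paper's algorithm: the class name and the hard-coded CAPACITY weights are assumptions made for illustration, whereas the paper derives node capabilities as runtime heuristics and additionally balances key locality against fairness. The sketch shows only the core mechanism, namely that reducers on more capable nodes receive proportionally larger shares of the intermediate key space.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical capacity-weighted partitioner: reducers hosted on more
// capable nodes are assigned proportionally larger slices of the key space.
public class CapacityAwarePartitioner extends Partitioner<Text, IntWritable> {

    // Assumed relative capacities of the nodes hosting each reducer;
    // the paper's scheme would derive such weights from per-node heuristics.
    private static final double[] CAPACITY = {1.0, 2.0, 1.5, 0.5};

    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Map the key's hash to a point in [0, 1).
        double point = (key.hashCode() & Integer.MAX_VALUE)
                / (double) Integer.MAX_VALUE;

        // Total capacity over the reducers actually configured.
        double total = 0.0;
        for (int i = 0; i < numPartitions; i++) {
            total += CAPACITY[i % CAPACITY.length];
        }

        // Walk the cumulative capacity distribution; the key lands in the
        // first partition whose cumulative share covers the point.
        double cumulative = 0.0;
        for (int i = 0; i < numPartitions; i++) {
            cumulative += CAPACITY[i % CAPACITY.length] / total;
            if (point < cumulative) {
                return i;
            }
        }
        return numPartitions - 1; // guard against floating-point rounding
    }
}
```

Such a partitioner would be wired into a job with job.setPartitionerClass(CapacityAwarePartitioner.class); with uniform weights it degenerates to Hadoop's default hash partitioning, which is the baseline the abstract compares against.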
Appears in
Collections
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Choonhwa
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
