Detailed Information


Massive parallelization technique for random linear network coding

Authors
Choi, S.-M.; Park, J.-S.
Issue Date
2014
Publisher
IEEE Computer Society
Keywords
GPGPU; Network Coding; Parallel algorithm
Citation
2014 International Conference on Big Data and Smart Computing, BIGCOMP 2014, pp.296 - 299
Journal Title
2014 International Conference on Big Data and Smart Computing, BIGCOMP 2014
Start Page
296
End Page
299
URI
https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/16423
DOI
10.1109/BIGCOMP.2014.6741456
Abstract
Random linear network coding (RLNC) has gained popularity as a useful performance-enhancing tool for communication networks. In this paper, we propose an RLNC parallel implementation technique for general-purpose graphics processing units (GPGPUs). Recently, GPGPU technology has paved the way for parallelizing RLNC; however, current state-of-the-art parallelization techniques for RLNC are often unable to fully utilize GPGPU hardware. Addressing this problem, we propose a new RLNC parallelization technique that can fully exploit GPGPU architectures. Our parallel method achieves over 4 times the throughput of existing state-of-the-art parallel RLNC decoding schemes for GPGPUs and 20 times the throughput of state-of-the-art serial RLNC decoders. © 2014 IEEE.
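
The abstract gives no implementation details, but for context: RLNC decoding amounts to Gauss-Jordan elimination over GF(2^8) on the augmented received packets, and a common way to map it onto a GPGPU is to assign one thread per byte of each row. The CUDA sketch below illustrates only that general idea; the kernel name, memory layout, and the assumption that pivot normalization and per-row factors are computed elsewhere are mine, not the paper's, and the paper's contribution is a different parallelization scheme that the abstract says exploits the GPGPU more fully.

// Minimal CUDA sketch (not the paper's algorithm): eliminate one pivot column
// from every augmented row [coding coefficients | payload] over GF(2^8).

#include <cuda_runtime.h>

// Multiply two GF(2^8) elements, reducing by x^8 + x^4 + x^3 + x^2 + 1 (0x11D).
__device__ unsigned char gf_mul(unsigned char a, unsigned char b) {
    unsigned char p = 0;
    for (int i = 0; i < 8; ++i) {
        if (b & 1) p ^= a;             // add a when the low bit of b is set
        unsigned char carry = a & 0x80;
        a <<= 1;
        if (carry) a ^= 0x1D;          // reduce modulo the field polynomial
        b >>= 1;
    }
    return p;
}

// One thread per (row, byte): subtract factor[r] times the pivot row from row r.
// factor[r] (that row's pivot-column entry times the pivot's inverse) is assumed
// to be precomputed in a prior kernel or on the host.
__global__ void eliminate_column(unsigned char *rows, const unsigned char *pivot_row,
                                 const unsigned char *factor,
                                 int num_rows, int width, int pivot) {
    int r = blockIdx.y;                              // one row per grid row
    int c = blockIdx.x * blockDim.x + threadIdx.x;   // one byte per thread
    if (r >= num_rows || c >= width || r == pivot) return;
    rows[r * width + c] ^= gf_mul(factor[r], pivot_row[c]);
}

// Host side: one launch per pivot column, e.g.
//   dim3 block(256);
//   dim3 grid((width + 255) / 256, num_rows);
//   eliminate_column<<<grid, block>>>(d_rows, d_pivot_row, d_factor,
//                                     num_rows, width, pivot);

Note that this per-byte layout still requires one kernel launch per pivot, which is one plausible way such decoders can leave a GPGPU underutilized; the abstract's claimed 4x speedup over prior GPGPU decoders suggests the proposed technique avoids limitations of this kind, though the specifics are not stated here.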
Appears in Collections
College of Engineering > Computer Engineering > Journal Articles



Related Researcher

Park, Joon Sang
Engineering (Department of Computer Engineering)