Detailed Information

MSV: Contribution of Modalities based on the Shapley Value

Authors
Jeon, Jangyeong; Kim, Jungeun; Park, Jinwoo; Kim, Junyeong
Issue Date
Jan-2024
Publisher
Institute of Electrical and Electronics Engineers Inc.
Keywords
Multi-modal; Shapley Value; Visual Commonsense Generation
Citation
Digest of Technical Papers - IEEE International Conference on Consumer Electronics, v.2024 IEEE
Journal Title
Digest of Technical Papers - IEEE International Conference on Consumer Electronics
Volume
2024 IEEE
URI
https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/73041
DOI
10.1109/ICCE59016.2024.10444313
ISSN
0747-668X
Abstract
Recently, with the remarkable development of deep learning, increasingly complex tasks arising from real-world applications have prompted a shift from single-modality learning to multi-modality comprehension. This also means that the need for models capable of handling comprehensive information from multi-modal datasets has grown. In multimodal tasks, proper interaction and fusion between different modalities, among language, vision, sensory, and text, play an important role in accurate prediction and identification. Detecting flaws introduced by the respective modalities when all modalities are combined is therefore of utmost importance. However, the complex, opaque, black-box nature of such models makes it challenging to understand their inner workings and the impact of individual modalities, especially in complicated multimodal tasks. To address this issue, we directly employ a method presented in previous works and apply it effectively to the Visual Commonsense Generation task to quantify the contribution of different modalities. In this paper, we introduce the Contribution of Modalities based on the Shapley Value (MSV) score, a metric designed to measure the marginal contribution of each modality. Drawing inspiration from previous studies that applied the Shapley value to modalities, we extend its application to the Visual Commonsense Generation task. In experiments conducted on a three-modality task, our score offers enhanced interpretability for the multi-modal model. © 2024 IEEE.
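The metric rests on the standard Shapley formulation: a modality's contribution is its marginal gain in the task metric, averaged over all subsets of the remaining modalities with the usual permutation weights. The sketch below is a generic exact Shapley computation, not the authors' implementation; the modality names (image, event, place) and the per-subset scores are hypothetical placeholders for whatever metric the evaluated model achieves when run with only that subset of inputs.

from itertools import combinations
from math import factorial

def shapley_values(modalities, value_fn):
    # Exact Shapley value of each modality. value_fn maps a frozenset of
    # modalities to a scalar score, e.g. a generation metric for a model
    # evaluated with only those modalities enabled.
    n = len(modalities)
    phi = {}
    for m in modalities:
        others = [x for x in modalities if x != m]
        contribution = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight of a size-k subset in the Shapley average:
                # k! * (n - k - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                contribution += weight * (value_fn(s | {m}) - value_fn(s))
        phi[m] = contribution
    return phi

# Hypothetical per-subset scores for a three-modality setting.
scores = {
    frozenset(): 0.00,
    frozenset({"image"}): 0.40,
    frozenset({"event"}): 0.35,
    frozenset({"place"}): 0.20,
    frozenset({"image", "event"}): 0.60,
    frozenset({"image", "place"}): 0.50,
    frozenset({"event", "place"}): 0.45,
    frozenset({"image", "event", "place"}): 0.70,
}
print(shapley_values(["image", "event", "place"], scores.__getitem__))

With three modalities the exact computation enumerates only 2^3 = 8 subsets, and the three values sum to the full-model score minus the no-input score (0.70 here); with many modalities one would approximate by sampling permutations instead.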
Appears in Collections
College of Software > Department of Artificial Intelligence > 1. Journal Articles
