Detailed Information


Natural-Language-Driven Multimodal Representation Learning for Audio-Visual Scene-Aware Dialog System (open access)

Authors
Heo, Yoonseok; Kang, Sangwoo; Seo, Jungyun
Issue Date
Sep-2023
Publisher
MDPI
Keywords
multimodal deep learning; audio-visual scene-aware dialog system; event-keyword-driven multimodal representation learning
Citation
SENSORS, v.23, no.18
Journal Title
SENSORS
Volume
23
Number
18
URI
https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/89460
DOI
10.3390/s23187875
ISSN
1424-8220
Abstract
With the development of multimedia systems in wireless environments, there is a rising need for artificial intelligence that can communicate with humans in a human-like manner, with a comprehensive understanding of various types of information. This paper therefore addresses an audio-visual scene-aware dialog system that can converse with users about audio-visual scenes; such a system must understand not only visual and textual information but also audio information in a comprehensive way. Despite substantial progress in multimodal representation learning with the language and visual modalities, two caveats remain: ineffective use of auditory information and the limited interpretability of deep learning systems' reasoning. To address these issues, we propose a novel audio-visual scene-aware dialog system that expresses explicit information from each modality as natural language, which can then be fused into a language model in a natural way. It leverages a transformer-based decoder to generate a coherent and correct response from this multimodal knowledge in a multitask learning setting. In addition, we present a response-driven temporal moment localization method for interpreting the model and verifying how the system generates its responses: the system provides the user with the evidence it referred to while responding, in the form of a timestamp of the scene. The proposed model outperforms the baseline on all quantitative and qualitative measurements; in particular, it achieves robust performance even when all three modalities, including audio, are used. We also conducted extensive experiments to investigate the proposed model, and we obtained state-of-the-art performance on the system response reasoning task.
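
As a reading aid, here is a minimal sketch of the fusion idea the abstract describes: explicit cues from the visual, audio, and dialog modalities are verbalized as natural language and used to condition an off-the-shelf transformer decoder. The GPT-2 backbone, the build_prompt helper, and the sample cues below are illustrative assumptions only, not the authors' implementation (which additionally uses multitask training and temporal moment localization).

# Minimal, hypothetical sketch of natural-language-driven multimodal fusion.
# None of this is the authors' code; GPT-2 stands in for the decoder.
from transformers import AutoTokenizer, AutoModelForCausalLM

def build_prompt(visual_events, audio_events, caption, history, question):
    """Verbalize explicit per-modality cues and fuse them as one text prompt."""
    return "\n".join([
        "Visual events: " + ", ".join(visual_events) + ".",
        "Audio events: " + ", ".join(audio_events) + ".",
        "Caption: " + caption,
        "Dialog history: " + " ".join(history),
        "Question: " + question,
        "Answer:",
    ])

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = build_prompt(
    visual_events=["a man opens a door", "he walks into the kitchen"],
    audio_events=["door creaking", "footsteps"],
    caption="A man enters the kitchen and pours a drink.",
    history=["Q: How many people are in the scene? A: One man."],
    question="What sound is heard when he enters?",
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=30,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
# Decode only the newly generated answer tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))

Because every modality is reduced to plain text before fusion, the prompt itself doubles as a human-readable record of what the model was conditioned on, which is the interpretability angle the abstract emphasizes.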
Files in This Item
There are no files associated with this item.
Appears in Collections
College of IT Convergence > Department of Software > 1. Journal Articles
