Detailed Information


Proposal of Deep Learning Systems Testing Framework Using Generative Large Language Models

Other Titles
생성형 대형 언어 모델을 사용한 딥러닝 시스템 테스트 프레임워크 제안
Authors
Lee, Joonwoo; Ju, Hansae; Lee, Scott Uk-Jin
Issue Date
Jul-2024
Publisher
Korean Society of Computer Information (한국컴퓨터정보학회)
Keywords
Large Language Model (대형 언어 모델); Adversarial Input Generation (적대적 예제 생성); Deep Learning Testing (딥러닝 테스팅)
Citation
Proceedings of the 2024 Korean Society of Computer Information Summer Conference (2024년도 한국컴퓨터정보학회 하계학술대회 논문집), v.32, no.2, pp. 55-56
Pages
2
Indexed
OTHER
Journal Title
Proceedings of the 2024 Korean Society of Computer Information Summer Conference (2024년도 한국컴퓨터정보학회 하계학술대회 논문집)
Volume
32
Number
2
Start Page
55
End Page
56
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/120784
Abstract
Ensuring the robustness and reliability of deep learning systems is essential as they are used in critical applications like autonomous driving, medical diagnostics, and financial forecasting. Traditional testing methods often miss rare or adversarial scenarios that can cause failures. This proposal leverages generative large language models (LLMs), such as GPT-4, to generate adversarial inputs for testing deep learning systems. LLMs can produce diverse and contextually relevant inputs, enhancing the detection of vulnerabilities and improving the robustness of these models. The research aims to develop an LLM-based adversarial input generation framework, evaluate its effectiveness, compare it with traditional methods, and assess improvements in system robustness. This approach promises to advance AI robustness testing and ensure deep learning systems are more resilient in real-world applications.
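The abstract describes a framework in which a generative LLM produces adversarial inputs that are fed to a deep learning system to surface robustness failures. The following is a minimal, self-contained sketch of such a test loop; the `generate_adversarial_variants` stub stands in for an actual LLM call (e.g. a GPT-4 prompt asking for meaning-preserving rewrites), and the toy classifier, function names, and perturbations are illustrative assumptions, not the paper's implementation.

```python
# Sketch of an LLM-based adversarial testing loop (hypothetical names).
# In the proposed framework, generate_adversarial_variants would invoke
# a generative LLM; here it is stubbed with rule-based perturbations so
# the loop runs without network access.

def generate_adversarial_variants(seed: str) -> list[str]:
    """Stand-in for an LLM prompt such as:
    'Rewrite this input to preserve its meaning but challenge the model.'"""
    return [
        seed.upper(),            # casing perturbation
        seed.replace("o", "0"),  # character-substitution (leet-style) noise
        seed + " !!!",           # trailing-punctuation noise
    ]

def toy_classifier(text: str) -> str:
    """Trivial keyword-based sentiment model, used only to show the loop."""
    return "positive" if "good" in text.lower() else "negative"

def robustness_test(seeds: list[str]) -> list[tuple[str, str]]:
    """Return (seed, variant) pairs where the prediction flipped,
    i.e. candidate robustness failures worth inspecting."""
    failures = []
    for seed in seeds:
        expected = toy_classifier(seed)
        for variant in generate_adversarial_variants(seed):
            if toy_classifier(variant) != expected:
                failures.append((seed, variant))
    return failures

failures = robustness_test(["this movie is good", "terrible plot"])
print(failures)  # character substitution flips the first seed's label
```

In the full framework the flagged pairs would feed the evaluation stage the abstract mentions: measuring detection rates against traditional test-generation baselines and using the failures to harden the model under test.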
Appears in
Collections
COLLEGE OF COMPUTING > ERICA School of Computer Science (ERICA 컴퓨터학부) > 1. Journal Articles



Related Researcher

Lee, Scott Uk-Jin
ERICA College of Computing (ERICA School of Computer Science, ERICA 컴퓨터학부)
