Locality-Based Action-Poisoning Attack against the Continuous Control of an Autonomous Driving Model

Full metadata record
dc.contributor.author: An, Yoonsoo
dc.contributor.author: Yang, Wonseok
dc.contributor.author: Choi, Daeseon
dc.date.accessioned: 2024-04-08T01:30:18Z
dc.date.available: 2024-04-08T01:30:18Z
dc.date.issued: 2024-02
dc.identifier.issn: 2227-9717
dc.identifier.uri: https://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/49403
dc.description.abstract: Various studies have applied Multi-Agent Reinforcement Learning (MARL) to control multiple agents so that they drive effectively and safely in simulation, demonstrating the applicability of MARL to autonomous driving. However, several studies have shown that MARL is vulnerable to poisoning attacks. This study proposes a 'locality-based action-poisoning attack' against MARL-based continuous control systems. In Reynolds' flocking algorithm, each bird in a flock interacts with its neighbors to generate collective behavior: each individual maintains an appropriate distance from its neighbors and moves in a similar direction. We use this concept to propose an action-poisoning attack, based on the hypothesis that an agent behaving significantly differently from its neighboring agents can disturb the driving stability of the entire group. We demonstrate that when a MARL-based continuous control system is trained in an environment where a single target agent performs actions that violate Reynolds' rules, the driving performance of all victim agents decreases, and the model can converge to a suboptimal policy. The proposed attack method can degrade the training performance of the victim model by up to 97% compared to the original model in certain settings, even when the attacker is allowed only black-box access.
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI
dc.title: Locality-Based Action-Poisoning Attack against the Continuous Control of an Autonomous Driving Model
dc.type: Article
dc.identifier.doi: 10.3390/pr12020314
dc.identifier.bibliographicCitation: PROCESSES, v.12, no.2
dc.identifier.wosid: 001172595700001
dc.identifier.scopusid: 2-s2.0-85187250571
dc.citation.number: 2
dc.citation.title: PROCESSES
dc.citation.volume: 12
dc.identifier.url: https://www.mdpi.com/2227-9717/12/2/314
dc.publisher.location: Switzerland
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.subject.keywordAuthor: reinforcement learning
dc.subject.keywordAuthor: multi-agent reinforcement learning
dc.subject.keywordAuthor: AI security
dc.subject.keywordAuthor: poisoning attack
dc.subject.keywordAuthor: adversarial attack
dc.relation.journalResearchArea: Engineering
dc.relation.journalWebOfScienceCategory: Engineering, Chemical
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
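The abstract invokes Reynolds' flocking rules (separation, alignment, cohesion) as the basis for the attack hypothesis. The following is a minimal sketch of those three rules for a single agent; the function name, weights, and distance threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of Reynolds' flocking rules (separation, alignment,
# cohesion). All names, weights, and thresholds are illustrative
# assumptions, not taken from the paper.

def flocking_action(pos, vel, neighbors, sep_dist=1.0,
                    w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """Return a 2D steering vector for one agent given its neighbors.

    pos, vel: (x, y) tuples for the agent.
    neighbors: list of ((x, y), (vx, vy)) tuples for nearby agents.
    """
    if not neighbors:
        return (0.0, 0.0)
    sep = [0.0, 0.0]   # separation: move away from too-close neighbors
    ali = [0.0, 0.0]   # alignment: match average neighbor velocity
    coh = [0.0, 0.0]   # cohesion: move toward neighbor center of mass
    for (nx, ny), (nvx, nvy) in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < sep_dist:            # repel only when too close
            sep[0] += dx / dist
            sep[1] += dy / dist
        ali[0] += nvx
        ali[1] += nvy
        coh[0] += nx
        coh[1] += ny
    n = len(neighbors)
    ali = [ali[0] / n - vel[0], ali[1] / n - vel[1]]   # alignment steer
    coh = [coh[0] / n - pos[0], coh[1] / n - pos[1]]   # cohesion steer
    return (w_sep * sep[0] + w_ali * ali[0] + w_coh * coh[0],
            w_sep * sep[1] + w_ali * ali[1] + w_coh * coh[1])
```

In the attack the abstract describes, a single poisoned target agent would act against such a steering signal (e.g. steering away from the group's center instead of toward it), which is what the hypothesis predicts destabilizes the remaining victim agents during training.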
Files in This Item: Go to Link
Appears in Collections: College of Information Technology > School of Software > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher: Choi, Daeseon, College of Information Technology (School of Software)
