Locality-Based Action-Poisoning Attack against the Continuous Control of an Autonomous Driving Model
DC Field | Value | Language |
---|---|---|
dc.contributor.author | An, Yoonsoo | - |
dc.contributor.author | Yang, Wonseok | - |
dc.contributor.author | Choi, Daeseon | - |
dc.date.accessioned | 2024-04-08T01:30:18Z | - |
dc.date.available | 2024-04-08T01:30:18Z | - |
dc.date.issued | 2024-02 | - |
dc.identifier.issn | 2227-9717 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/49403 | - |
dc.description.abstract | Various studies have been conducted on Multi-Agent Reinforcement Learning (MARL) to control multiple agents to drive effectively and safely in simulation, demonstrating the applicability of MARL to autonomous driving. However, several studies have indicated that MARL is vulnerable to poisoning attacks. This study proposes a 'locality-based action-poisoning attack' against MARL-based continuous control systems. In Reynolds' flocking algorithm, each bird in a flock interacts with its neighbors to generate collective behavior, implemented through rules under which each individual maintains an appropriate distance from its neighbors and moves in a similar direction. We use this concept to propose an action-poisoning attack, based on the hypothesis that if an agent performs behaviors significantly different from those of neighboring agents, it can disturb the driving stability of the entire group of agents. We demonstrate that when a MARL-based continuous control system is trained in an environment where a single target agent performs actions that violate Reynolds' rules, the driving performance of all victim agents decreases, and the model can converge to a suboptimal policy. The proposed attack method can degrade the training performance of the victim model by up to 97% compared to the original model in certain settings, when the attacker is allowed black-box access. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI | - |
dc.title | Locality-Based Action-Poisoning Attack against the Continuous Control of an Autonomous Driving Model | - |
dc.type | Article | - |
dc.identifier.doi | 10.3390/pr12020314 | - |
dc.identifier.bibliographicCitation | PROCESSES, v.12, no.2 | - |
dc.identifier.wosid | 001172595700001 | - |
dc.identifier.scopusid | 2-s2.0-85187250571 | - |
dc.citation.number | 2 | - |
dc.citation.title | PROCESSES | - |
dc.citation.volume | 12 | - |
dc.identifier.url | https://www.mdpi.com/2227-9717/12/2/314 | - |
dc.publisher.location | Switzerland | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.subject.keywordAuthor | reinforcement learning | - |
dc.subject.keywordAuthor | multi-agent reinforcement learning | - |
dc.subject.keywordAuthor | AI security | - |
dc.subject.keywordAuthor | poisoning attack | - |
dc.subject.keywordAuthor | adversarial attack | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Engineering, Chemical | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
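The abstract's premise — neighbors keep an appropriate distance and move in a similar direction, and a target agent attacks by violating these rules — can be sketched roughly as follows. This is an illustrative sketch only: the function names, rule weights, and exact steering formulae are assumptions for exposition, not the paper's implementation.

```python
import math

def flocking_action(positions, velocities, i, radius=5.0,
                    w_sep=1.0, w_align=0.5, w_coh=0.5):
    """Steering update for agent i from Reynolds' three flocking rules,
    computed over neighbors within `radius` (weights are illustrative)."""
    xi, yi = positions[i]
    vxi, vyi = velocities[i]
    neighbors = []
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue
        d = math.hypot(xj - xi, yj - yi)
        if d < radius:
            neighbors.append((j, d))
    if not neighbors:
        return (0.0, 0.0)  # isolated agent: no local interaction
    n = len(neighbors)
    # Separation: steer away from close neighbors (weighted by 1/d^2).
    sep_x = sum((xi - positions[j][0]) / d**2 for j, d in neighbors)
    sep_y = sum((yi - positions[j][1]) / d**2 for j, d in neighbors)
    # Alignment: match the neighbors' mean velocity.
    ali_x = sum(velocities[j][0] for j, _ in neighbors) / n - vxi
    ali_y = sum(velocities[j][1] for j, _ in neighbors) / n - vyi
    # Cohesion: steer toward the neighbors' centroid.
    coh_x = sum(positions[j][0] for j, _ in neighbors) / n - xi
    coh_y = sum(positions[j][1] for j, _ in neighbors) / n - yi
    return (w_sep * sep_x + w_align * ali_x + w_coh * coh_x,
            w_sep * sep_y + w_align * ali_y + w_coh * coh_y)

def poisoned_action(positions, velocities, i, scale=1.0):
    """Rule-violating action for the single target agent: steer in the
    direction opposite to the flocking update (the attack hypothesis)."""
    ax, ay = flocking_action(positions, velocities, i)
    return (-scale * ax, -scale * ay)
```

During training of the victim MARL model, one target agent would replace its policy output with `poisoned_action`, so its behavior diverges maximally from its neighbors' local expectations while all other agents learn normally.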