Autonomous Control of Combat Unmanned Aerial Vehicles to Evade Surface-to-Air Missiles Using Deep Reinforcement Learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Gyeong Taek | - |
dc.contributor.author | Kim, Chang Ouk | - |
dc.date.accessioned | 2024-03-20T13:30:25Z | - |
dc.date.available | 2024-03-20T13:30:25Z | - |
dc.date.issued | 2020-12 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/90761 | - |
dc.description.abstract | This paper proposes a new reinforcement learning approach for executing combat unmanned aerial vehicle (CUAV) missions. We consider missions with the following goals: guided missile avoidance, shortest-path flight, and formation flight. For reinforcement learning, the representation of the current agent state is important. We propose a novel method of using the coordinates and angle of a CUAV to effectively represent its state. Furthermore, we develop a reinforcement learning algorithm with enhanced exploration through amplification of the imitation effect (AIE). This algorithm consists of self-imitation learning and random network distillation algorithms. We assert that these two algorithms complement each other and that combining them amplifies the imitation effect for exploration. Empirical results show that the proposed AIE approach is highly effective at finding a CUAV's shortest flight path while avoiding enemy missiles. Test results confirm that with our method, a single CUAV reaches its target from its starting point 95% of the time and a squadron of four simultaneously operating CUAVs reaches the target 70% of the time. | - |
dc.format.extent | 13 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Autonomous Control of Combat Unmanned Aerial Vehicles to Evade Surface-to-Air Missiles Using Deep Reinforcement Learning | - |
dc.type | Article | - |
dc.identifier.wosid | 000604508700001 | - |
dc.identifier.doi | 10.1109/ACCESS.2020.3046284 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.8, pp 226724 - 226736 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.scopusid | 2-s2.0-85183485101 | - |
dc.citation.endPage | 226736 | - |
dc.citation.startPage | 226724 | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 8 | - |
dc.type.docType | Article | - |
dc.publisher.location | United States | - |
dc.subject.keywordAuthor | Missiles | - |
dc.subject.keywordAuthor | Reinforcement learning | - |
dc.subject.keywordAuthor | Games | - |
dc.subject.keywordAuthor | Unmanned aerial vehicles | - |
dc.subject.keywordAuthor | Mathematical model | - |
dc.subject.keywordAuthor | Licenses | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Deep reinforcement learning | - |
dc.subject.keywordAuthor | combat unmanned aerial vehicle | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | autonomous flight management system | - |
dc.subject.keywordAuthor | path planning | - |
dc.subject.keywordAuthor | exploration | - |
dc.subject.keywordPlus | UAV | - |
dc.subject.keywordPlus | DECISION | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
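The abstract describes AIE as a combination of self-imitation learning (SIL) and random network distillation (RND) for exploration. A minimal illustrative sketch of these two components is given below; all class and function names are hypothetical and the simplified linear networks are assumptions for brevity, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class RNDBonus:
    """Random network distillation (sketch): the intrinsic reward is the
    prediction error between a fixed random 'target' network and a trained
    'predictor' network. Linear maps stand in for the real neural nets."""
    def __init__(self, state_dim, feat_dim=16, lr=0.01):
        self.target = rng.normal(size=(state_dim, feat_dim))     # fixed
        self.predictor = rng.normal(size=(state_dim, feat_dim))  # trained
        self.lr = lr

    def bonus(self, s):
        err = s @ self.predictor - s @ self.target
        # One gradient step pulls the predictor toward the target on visited
        # states, so frequently visited states yield a shrinking bonus.
        self.predictor -= self.lr * np.outer(s, err)
        return float(np.mean(err ** 2))

class SelfImitationBuffer:
    """Self-imitation learning (sketch): store transitions only from
    episodes whose return beats the running average, so the agent can
    re-learn from its own past good behavior."""
    def __init__(self):
        self.episodes, self.returns = [], []

    def add(self, transitions, ep_return):
        if not self.returns or ep_return > np.mean(self.returns):
            self.episodes.append(transitions)
        self.returns.append(ep_return)

# Toy demo: revisiting the same state shrinks the RND bonus, which is the
# novelty signal that pushes exploration toward unvisited states.
state = np.ones(4) / 2.0          # toy normalized state
rnd = RNDBonus(state_dim=4)
b1 = rnd.bonus(state)
b2 = rnd.bonus(state)             # b2 < b1 after the predictor update
```

The complementarity the abstract asserts can be read off this sketch: SIL concentrates learning on above-average trajectories, while the decaying RND bonus keeps pushing the policy away from already-explored states.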
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.