Optimal Number of Turns Design of IPT for Maximum Power Efficiency Based on Reinforcement Learning with DQN (DQN 강화학습 기반 최대 효율 구동 가능한 최적 IPT 코일 턴 수 설계 연구)
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 장진혁 | - |
dc.contributor.author | 이은수 | - |
dc.date.accessioned | 2023-09-18T05:32:41Z | - |
dc.date.available | 2023-09-18T05:32:41Z | - |
dc.date.issued | 2023-08 | - |
dc.identifier.issn | 1229-2214 | - |
dc.identifier.issn | 2288-6281 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/115355 | - |
dc.description.abstract | This study proposes a method for finding the optimal number of coil turns in an inductive power transfer (IPT) system for maximum power efficiency by using a deep Q-learning network (DQN) based on a reinforcement learning (RL) algorithm. Analytically obtaining the optimal number of turns for the transmitter (Tx) and receiver (Rx) coils that yields satisfactory operation and maximum power efficiency is nearly impossible; in practice, the Tx and Rx coils are simply wound until they fill the cores. Moreover, exhaustively simulating every combination of Tx and Rx coil windings to find the maximum power efficiency requires a considerable amount of time. To shorten the computation time needed to determine the number of coil turns with the highest power efficiency, the proposed method uses the RL algorithm to select turn counts with high Q-values through an ε-greedy turn-selection process. After a few training episodes of the neural network, the proposed algorithm reaches the expected maximum power efficiency after simulating only 20% of all possible combinations. The proposed RL algorithm is evaluated through FEM simulation analysis, which shows that the optimal number of turns can be determined rapidly for various WPT cases with different loads. | - |
dc.format.extent | 8 | - |
dc.language | Korean | - |
dc.language.iso | KOR | - |
dc.publisher | 전력전자학회 | - |
dc.title | DQN 강화학습 기반 최대 효율 구동 가능한 최적 IPT 코일 턴 수 설계 연구 | - |
dc.title.alternative | Optimal Number of Turns Design of IPT for Maximum Power Efficiency based on Reinforcement Learning with DQN | - |
dc.type | Article | - |
dc.publisher.location | Republic of Korea | - |
dc.identifier.doi | 10.6113/TKPE.2023.28.4.255 | - |
dc.identifier.bibliographicCitation | 전력전자학회 논문지, v.28, no.4, pp 255 - 262 | - |
dc.citation.title | 전력전자학회 논문지 | - |
dc.citation.volume | 28 | - |
dc.citation.number | 4 | - |
dc.citation.startPage | 255 | - |
dc.citation.endPage | 262 | - |
dc.identifier.kciid | ART002992259 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | kci | - |
dc.subject.keywordAuthor | Reinforcement Learning (RL) | - |
dc.subject.keywordAuthor | WPT (Wireless Power Transfer) | - |
dc.subject.keywordAuthor | Inductive Power Transfer (IPT) | - |
dc.subject.keywordAuthor | DQN (Deep Q-learning Network) | - |
dc.subject.keywordAuthor | ε-greedy process | - |
dc.identifier.url | https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11490721 | - |
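The abstract describes an ε-greedy, Q-value-driven search over Tx/Rx turn combinations that avoids simulating the full grid. As a rough illustration of that idea only, the sketch below replaces the paper's DQN with a plain Q-table and its FEM-simulated efficiency with a hypothetical closed-form function; `simulated_efficiency`, the turn ranges, and all hyperparameters are assumptions for demonstration, not values from the paper.

```python
import random

# Hypothetical stand-in for the FEM-evaluated power efficiency of one
# (Tx turns, Rx turns) winding combination. The paper obtains this value
# from FEM simulation; this closed-form bump is purely illustrative.
def simulated_efficiency(n_tx, n_rx):
    return 1.0 / (1.0 + (n_tx - 14) ** 2 + (n_rx - 9) ** 2)

# Assumed search ranges for the Tx and Rx turn counts (not from the paper).
ACTIONS = [(t, r) for t in range(5, 21) for r in range(5, 16)]

def epsilon_greedy_search(episodes=200, epsilon=0.3, alpha=0.5, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}   # tabular Q-value per turn combination
    evaluated = set()               # distinct combinations actually simulated
    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)        # explore a random combination
        else:
            action = max(q, key=q.get)          # exploit the highest Q-value
        reward = simulated_efficiency(*action)  # one "FEM run" per episode
        evaluated.add(action)
        q[action] += alpha * (reward - q[action])  # incremental Q update
    best = max(q, key=q.get)
    return best, len(evaluated)

best, n_sims = epsilon_greedy_search()
```

Because exploitation repeatedly revisits the current best combination, the number of distinct combinations simulated stays below the size of the full grid, which is the cost-saving effect the paper reports (reaching the expected maximum after simulating only about 20% of all combinations).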