Development of an AI framework using neural process continuous reinforcement learning to optimize highly volatile financial portfolios
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kang, Martin | - |
dc.contributor.author | Templeton, Gary F. | - |
dc.contributor.author | Kwak, Dong-Heon | - |
dc.contributor.author | Um, Sungyong | - |
dc.date.accessioned | 2024-07-10T07:30:21Z | - |
dc.date.available | 2024-07-10T07:30:21Z | - |
dc.date.issued | 2024-09 | - |
dc.identifier.issn | 0950-7051 | - |
dc.identifier.issn | 1872-7409 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/119849 | - |
dc.description.abstract | High volatility presents considerable challenges in the optimization of financial portfolio assets. This study develops and explores model-based reinforcement learning (MBRL) in this context. Existing literature suggests that while the model-free approach offers certain computational advantages, it frequently fails to encapsulate the nature of highly dynamic capital markets. This limitation is due to insufficient consideration of the interactions between agents and environmental states within the reinforcement learning framework. Conversely, MBRL encounters inaccuracies in representing the stochastically evolving states typical of volatile capital markets. To address these limitations, we introduce an innovative AI framework in the MBRL domain by integrating attentive neural processes with continuous-time MBRL. This novel approach, termed Neural Process Continuous Reinforcement Learning (NPCRL), is posited to enhance the ability of MBRL to adapt to volatile fluctuations in capital markets. The effectiveness of NPCRL is empirically evaluated through a series of experiments using three important performance indicators of financial portfolios: returns, risk, and drawdown recovery. The results demonstrate that NPCRL surpasses other methods in achieving a balanced trade-off between long-term returns and risk management. This study advances our understanding of machine learning development by suggesting methods that are more proficient at capturing and adapting to volatile training environments. © 2024 | - |
dc.format.extent | 13 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Elsevier B.V. | - |
dc.title | Development of an AI framework using neural process continuous reinforcement learning to optimize highly volatile financial portfolios | - |
dc.type | Article | - |
dc.publisher.location | Netherlands | - |
dc.identifier.doi | 10.1016/j.knosys.2024.112017 | - |
dc.identifier.scopusid | 2-s2.0-85197091458 | - |
dc.identifier.wosid | 001264110100001 | - |
dc.identifier.bibliographicCitation | Knowledge-Based Systems, v.300, pp 1 - 13 | - |
dc.citation.title | Knowledge-Based Systems | - |
dc.citation.volume | 300 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 13 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.subject.keywordPlus | IDIOSYNCRATIC VOLATILITY | - |
dc.subject.keywordPlus | GO | - |
dc.subject.keywordPlus | GAME | - |
dc.subject.keywordAuthor | Machine learning | - |
dc.subject.keywordAuthor | Model-free reinforcement learning | - |
dc.subject.keywordAuthor | Neural network | - |
dc.subject.keywordAuthor | Portfolio optimization | - |
dc.identifier.url | https://www.sciencedirect.com/science/article/pii/S0950705124006518?via%3Dihub | - |
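The abstract describes NPCRL as pairing attentive neural processes with continuous-time MBRL. The following is only an illustrative sketch of the attention step at the heart of an attentive neural process — each target input attends to observed context points by input similarity — and is not the authors' implementation; the function name, kernel choice, and toy data are all hypothetical.

```python
import numpy as np

def attentive_aggregate(context_x, context_r, target_x, scale=1.0):
    """Attention-weighted aggregation of context representations:
    each target point mixes context representations with weights
    given by a softmax over input-space similarity."""
    # Squared distances between every target and every context input
    d2 = ((target_x[:, None, :] - context_x[None, :, :]) ** 2).sum(-1)
    logits = -d2 / (2 * scale ** 2)
    # Numerically stable softmax over the context axis
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Each target receives a convex combination of context representations
    return w @ context_r

# Toy example: 3 context points with 2-d representations, 2 target inputs
cx = np.array([[0.0], [1.0], [2.0]])
cr = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tx = np.array([[0.1], [1.9]])
agg = attentive_aggregate(cx, cr, tx)
print(agg.shape)  # (2, 2)
```

In a full NPCRL-style pipeline this aggregation would feed a learned dynamics model over market states; here it merely shows how attention lets the representation of each query adapt to the most similar observed contexts.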