Detailed Information

Efficient Difficulty Level Balancing in Match-3 Puzzle Games: A Comparative Study of Proximal Policy Optimization and Soft Actor-Critic Algorithms

Full metadata record
dc.contributor.author: Kim, Byounggwon
dc.contributor.author: Kim, Jungyoon
dc.date.accessioned: 2023-12-15T15:07:43Z
dc.date.available: 2023-12-15T15:07:43Z
dc.date.issued: 2023-11
dc.identifier.issn: 2079-9292
dc.identifier.uri: https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/89498
dc.description.abstract: Match-3 puzzle games have garnered significant popularity across all age groups due to their simplicity, non-violent nature, and concise gameplay. However, developing captivating and well-balanced stages for match-3 puzzle games remains a challenging task for game developers. This study aims to identify the reinforcement learning algorithm best suited to streamlining the level-balancing verification process in match-3 games by comparing the Soft Actor-Critic (SAC) and Proximal Policy Optimization (PPO) algorithms. By training an agent with each of the two algorithms, the paper investigates which approach yields more efficient and effective difficulty-balancing test results. A comparative analysis of cumulative rewards and entropy shows that SAC is the better choice for building an efficient agent capable of handling difficulty level balancing for stages in a match-3 puzzle game, owing to its superior learning performance and higher training stability. This study is expected to contribute to improved level-balancing techniques in match-3 puzzle games and to enhance the overall gaming experience for players. (A minimal sketch of the comparison protocol follows this metadata record.)
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI
dc.title: Efficient Difficulty Level Balancing in Match-3 Puzzle Games: A Comparative Study of Proximal Policy Optimization and Soft Actor-Critic Algorithms
dc.type: Article
dc.identifier.wosid: 001099495600001
dc.identifier.doi: 10.3390/electronics12214456
dc.identifier.bibliographicCitation: ELECTRONICS, v.12, no.21
dc.description.isOpenAccess: Y
dc.identifier.scopusid: 2-s2.0-85176321176
dc.citation.title: ELECTRONICS
dc.citation.volume: 12
dc.citation.number: 21
dc.type.docType: Article
dc.publisher.location: Switzerland
dc.subject.keywordAuthor: match-3 puzzle game
dc.subject.keywordAuthor: balancing test
dc.subject.keywordAuthor: reinforcement learning
dc.subject.keywordAuthor: PPO
dc.subject.keywordAuthor: SAC
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Physics
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Physics, Applied
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
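
The comparison protocol described in the abstract (train a PPO agent and a SAC agent under the same budget, then compare cumulative reward and entropy) can be illustrated with a minimal Stable-Baselines3 sketch. The paper's match-3 level-balancing environment is not part of this record, so the standard Gymnasium environment Pendulum-v1 stands in here; note that Stable-Baselines3's SAC implementation requires a continuous action space, unlike a raw match-3 board of discrete tile swaps. The environment ID, training budget, and evaluation settings below are assumptions for illustration, not the authors' configuration.

import gymnasium as gym
from stable_baselines3 import PPO, SAC
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

ENV_ID = "Pendulum-v1"   # placeholder for the paper's match-3 environment
TRAIN_STEPS = 50_000     # illustrative budget, not the paper's setting

results = {}
for name, algo in (("PPO", PPO), ("SAC", SAC)):
    env = Monitor(gym.make(ENV_ID))
    model = algo("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=TRAIN_STEPS)

    # Cumulative reward is one of the two comparison metrics named in the abstract.
    mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=20)
    results[name] = (mean_reward, std_reward)
    env.close()

for name, (mean_reward, std_reward) in results.items():
    print(f"{name}: mean episode reward = {mean_reward:.1f} +/- {std_reward:.1f}")

The second metric, policy entropy, is usually read from training logs rather than computed after the fact; Stable-Baselines3, for example, writes entropy-related statistics (PPO's entropy loss, SAC's entropy coefficient) to TensorBoard when a tensorboard_log directory is passed to the algorithm constructor. The abstract attributes SAC's advantage to the superior learning performance and higher stability observed in these two signals.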
Appears in Collections: ETC > 1. Journal Articles

Related Researcher
Kim, Jung Yoon
College of IT Convergence (Department of Game Media)
