Federated Learning for Clinical Event Classification Using Vital Signs Data
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Rakhmiddin, Ruzaliev | - |
dc.contributor.author | Lee, KangYoon | - |
dc.date.accessioned | 2023-08-25T08:41:52Z | - |
dc.date.available | 2023-08-25T08:41:52Z | - |
dc.date.created | 2023-08-25 | - |
dc.date.issued | 2023-07 | - |
dc.identifier.issn | 2414-4088 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/88853 | - |
dc.description.abstract | Accurate and timely diagnosis is a pillar of effective healthcare, but gathering extensive training data while maintaining patient privacy remains a challenge. This study introduces a novel approach that uses federated learning (FL) and a cross-device multimodal model for clinical event classification based on vital signs data. Our architecture employs FL to train several machine learning models, including random forest, AdaBoost, and SGD ensemble models, on vital signs data sourced from a diverse patient population at a Boston hospital (the MIMIC-IV dataset). The FL structure trains directly on each client's device, so no sensitive data are transferred and patient privacy is preserved. The study demonstrates that FL offers a powerful tool for privacy-preserving clinical event classification, with our approach achieving an accuracy of 98.9%. These findings highlight the significant potential of FL and cross-device ensemble technology in healthcare applications, especially for handling large volumes of sensitive patient data. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | MDPI | - |
dc.relation.isPartOf | MULTIMODAL TECHNOLOGIES AND INTERACTION | - |
dc.title | Federated Learning for Clinical Event Classification Using Vital Signs Data | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 001038904900001 | - |
dc.identifier.doi | 10.3390/mti7070067 | - |
dc.identifier.bibliographicCitation | MULTIMODAL TECHNOLOGIES AND INTERACTION, v.7, no.7 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.scopusid | 2-s2.0-85166274251 | - |
dc.citation.title | MULTIMODAL TECHNOLOGIES AND INTERACTION | - |
dc.citation.volume | 7 | - |
dc.citation.number | 7 | - |
dc.contributor.affiliatedAuthor | Rakhmiddin, Ruzaliev | - |
dc.contributor.affiliatedAuthor | Lee, KangYoon | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | federated learning | - |
dc.subject.keywordAuthor | clinical events | - |
dc.subject.keywordAuthor | vital signs | - |
dc.subject.keywordAuthor | classification | - |
dc.subject.keywordAuthor | multimodal | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Cybernetics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.description.journalRegisteredClass | scopus | - |
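The abstract above describes on-device training of random forest, AdaBoost, and SGD models on vital signs, with no raw data leaving the client. The following is a minimal illustrative sketch of that setup, not the paper's implementation: the record does not specify the aggregation scheme, so the sketch assumes per-client local training with server-side majority voting over the client models, synthetic vitals stand in for MIMIC-IV, and the helper names (`make_client_data`, `local_train`, `federated_predict`) are hypothetical.

```python
# Federated-style sketch: each client trains locally on its own vital-signs
# data; only fitted models (never raw patient records) reach the server,
# which combines their predictions by majority vote.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_client_data(n=500, n_vitals=6):
    # Hypothetical vitals matrix (e.g., HR, SBP, DBP, SpO2, RR, temp)
    # with a toy binary clinical-event label.
    X = rng.normal(size=(n, n_vitals))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

# Three simulated client devices, each holding its own private data.
clients = [make_client_data() for _ in range(3)]

def local_train(X, y):
    # On-device training of the three model types named in the abstract.
    models = [
        RandomForestClassifier(n_estimators=50, random_state=0),
        AdaBoostClassifier(n_estimators=50, random_state=0),
        SGDClassifier(random_state=0),
    ]
    for m in models:
        m.fit(X, y)
    return models

client_models = [local_train(X, y) for X, y in clients]

def federated_predict(X):
    # Server-side aggregation: majority vote across all client models.
    votes = np.stack([m.predict(X) for ms in client_models for m in ms])
    return (votes.mean(axis=0) >= 0.5).astype(int)

X_test, y_test = make_client_data(n=200)
acc = (federated_predict(X_test) == y_test).mean()
print(f"held-out accuracy of the voted ensemble: {acc:.3f}")
```

Majority voting is only one way a coordinating server could combine client models; the published approach may instead average model updates or weight clients by data volume.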