Robust orthogonal matrix factorization for efficient subspace learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Eunwoo | - |
dc.contributor.author | Oh, Songhwai | - |
dc.date.accessioned | 2021-06-18T09:40:54Z | - |
dc.date.available | 2021-06-18T09:40:54Z | - |
dc.date.issued | 2015-11 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.issn | 1872-8286 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/45722 | - |
dc.description.abstract | Low-rank matrix factorization plays an important role in the areas of pattern recognition, computer vision, and machine learning. Recently, a new family of methods, such as l(1)-norm minimization and robust PCA, has been proposed for low-rank subspace analysis problems and has been shown to be robust against outliers and missing data. However, these methods suffer from heavy computational loads and can fail to find a solution when highly corrupted data are present. In this paper, a robust orthogonal matrix approximation method using fixed-rank factorization is proposed. The proposed method finds a robust solution efficiently using orthogonality and smoothness constraints. The method is also extended to handle rank uncertainty through a rank estimation strategy for practical real-world problems. The proposed method is applied to a number of low-rank matrix approximation problems, and experimental results show that it is highly accurate, fast, and efficient compared to existing methods. (C) 2015 Elsevier B.V. All rights reserved. | - |
dc.format.extent | 12 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | ELSEVIER SCIENCE BV | - |
dc.title | Robust orthogonal matrix factorization for efficient subspace learning | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.neucom.2015.04.074 | - |
dc.identifier.bibliographicCitation | NEUROCOMPUTING, v.167, pp 218 - 229 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.wosid | 000358808500024 | - |
dc.identifier.scopusid | 2-s2.0-84952630186 | - |
dc.citation.endPage | 229 | - |
dc.citation.startPage | 218 | - |
dc.citation.title | NEUROCOMPUTING | - |
dc.citation.volume | 167 | - |
dc.type.docType | Article | - |
dc.publisher.location | Netherlands | - |
dc.subject.keywordAuthor | Low-rank matrix factorization | - |
dc.subject.keywordAuthor | l(1)-norm | - |
dc.subject.keywordAuthor | Subspace learning | - |
dc.subject.keywordAuthor | Augmented Lagrangian method | - |
dc.subject.keywordAuthor | Rank estimation | - |
dc.subject.keywordPlus | PRINCIPAL COMPONENT ANALYSIS | - |
dc.subject.keywordPlus | REGULARIZATION | - |
dc.subject.keywordPlus | APPROXIMATIONS | - |
dc.subject.keywordPlus | ALGORITHMS | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
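The abstract describes a fixed-rank factorization with an orthogonality constraint that stays robust to sparse outliers. The paper's own augmented Lagrangian algorithm is not reproduced in this record; the following is a minimal NumPy sketch of the general idea under an assumed Frobenius/l(1) splitting, X ≈ UV + S with U column-orthonormal and S sparse. All names and parameter choices here are illustrative, not the authors' implementation.

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def robust_orthogonal_factorization(X, rank, lam=0.5, n_iter=50):
    """Illustrative sketch (not the paper's algorithm): fit X ~ U @ V + S,
    where U (m x rank) has orthonormal columns and S absorbs sparse outliers,
    by block coordinate descent on
        lam * ||X - U V - S||_F^2 + ||S||_1.
    Each block update below is solved exactly, so the objective is
    nonincreasing across iterations."""
    # Initialize U, V from a truncated SVD of the (possibly corrupted) data.
    Uf, s, Vt = np.linalg.svd(X, full_matrices=False)
    U = Uf[:, :rank]
    V = s[:rank, None] * Vt[:rank]
    S = np.zeros_like(X)
    for _ in range(n_iter):
        L = X - S  # current outlier-corrected data
        # U-step: orthogonal Procrustes; U is the polar factor of L @ V.T.
        P, _, Qt = np.linalg.svd(L @ V.T, full_matrices=False)
        U = P @ Qt
        # V-step: least squares, closed form since U has orthonormal columns.
        V = U.T @ L
        # S-step: prox of the l1 term, threshold 1/(2*lam).
        S = soft_threshold(X - U @ V, 1.0 / (2.0 * lam))
    return U, V, S
```

With sufficiently sparse, large-magnitude corruptions, the S term captures the outliers while the orthonormal factor U spans the clean low-rank subspace; U.T @ U equals the identity by construction of the Procrustes step.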