EctFormer: High-Imperceptibility Deep Image Steganography Based on Empirical Mode Decomposition
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Duan, Xintao | - |
dc.contributor.author | Li, Sen | - |
dc.contributor.author | Wang, Zhao | - |
dc.contributor.author | Wei, Bingxin | - |
dc.contributor.author | Nam, Haewoon | - |
dc.contributor.author | Qin, Chuan | - |
dc.date.accessioned | 2025-09-17T05:30:36Z | - |
dc.date.available | 2025-09-17T05:30:36Z | - |
dc.date.issued | 2025-08 | - |
dc.identifier.issn | 1051-8215 | - |
dc.identifier.issn | 1558-2205 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/126472 | - |
dc.description.abstract | Image steganography, a crucial technique for secure information transmission, faces the challenge of balancing embedding capacity with visual imperceptibility and security. Existing methods often struggle to maximize these metrics simultaneously, particularly when handling complex image details and achieving adaptive feature representation. To address this, we propose EctFormer, a novel deep steganography framework based on Image Hiding Empirical Mode Decomposition (IHEMD). EctFormer employs a compact autoencoder architecture with a key innovation: an integrated IHEMD module that adaptively decomposes images into physically meaningful intrinsic mode functions (IMFs) and residual components. This decomposition allows for superior feature representation and information embedding. Furthermore, we introduce an intrinsic mode loss function within a novel multi-image training strategy, achieving a remarkable embedding capacity of 96 bits per pixel. Experimental results on the DIV2K, COCO, and ImageNet datasets demonstrate EctFormer's superior performance. Our method significantly improves PSNR (with gains exceeding 17.00 dB for single-image tasks and 11.00 dB for multi-image tasks) while maintaining high SSIM values (above 0.99). These results surpass current state-of-the-art methods, validating the efficacy of our IHEMD-based approach and the proposed training strategy. EctFormer provides a new effective paradigm for image steganography and enables high-capacity, high-security covert communication. The code is available at https://github.com/lisen1129/EctFormer. © 2025 IEEE. All rights reserved. (Illustrative sketches of the EMD decomposition and the PSNR metric follow the metadata table below.) | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | EctFormer: High-Imperceptibility Deep Image Steganography Based on Empirical Mode Decomposition | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/TCSVT.2025.3603961 | - |
dc.identifier.scopusid | 2-s2.0-105014589760 | - |
dc.identifier.bibliographicCitation | IEEE Transactions on Circuits and Systems for Video Technology | - |
dc.citation.title | IEEE Transactions on Circuits and Systems for Video Technology | - |
dc.type.docType | Article in press | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | Attention Mechanism | - |
dc.subject.keywordAuthor | Empirical Mode Decomposition | - |
dc.subject.keywordAuthor | Image Steganography | - |
dc.subject.keywordAuthor | Transformer | - |
dc.subject.keywordAuthor | Embeddings | - |
dc.subject.keywordAuthor | Steganography | - |
dc.subject.keywordAuthor | Visual Communication | - |
dc.subject.keywordAuthor | Attention Mechanisms | - |
dc.subject.keywordAuthor | Embedding Capacity | - |
dc.subject.keywordAuthor | Feature Representation | - |
dc.subject.keywordAuthor | Image Hiding | - |
dc.subject.keywordAuthor | Information Transmission | - |
dc.subject.keywordAuthor | Multi-images | - |
dc.subject.keywordAuthor | Training Strategy | - |
dc.subject.keywordAuthor | Image Enhancement | - |
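
Two technical points in the abstract above benefit from concrete illustration. First, the IHEMD module builds on empirical mode decomposition (EMD), which splits a signal into intrinsic mode functions (IMFs) plus a residual. The sketch below shows classic 1-D EMD by envelope sifting; it is a minimal illustration of the decomposition idea only, not the paper's IHEMD module, and every name in it (`emd`, `sift`, `local_extrema`, `n_imfs`) is hypothetical.

```python
# Minimal 1-D empirical mode decomposition (EMD) by sifting.
# Illustrative sketch only -- NOT the paper's IHEMD module.
import numpy as np
from scipy.interpolate import CubicSpline

def local_extrema(x):
    """Indices of local maxima and minima of a 1-D signal."""
    d = np.diff(x)
    maxima = np.where((np.hstack([0.0, d]) > 0) & (np.hstack([d, 0.0]) < 0))[0]
    minima = np.where((np.hstack([0.0, d]) < 0) & (np.hstack([d, 0.0]) > 0))[0]
    return maxima, minima

def sift(x, n_iter=10):
    """Extract one IMF: repeatedly subtract the mean of the envelopes."""
    h, t = x.copy(), np.arange(len(x))
    for _ in range(n_iter):
        mx, mn = local_extrema(h)
        if len(mx) < 2 or len(mn) < 2:       # too few extrema to fit envelopes
            break
        upper = CubicSpline(mx, h[mx])(t)    # envelope through the maxima
        lower = CubicSpline(mn, h[mn])(t)    # envelope through the minima
        h = h - (upper + lower) / 2.0        # remove the local mean
    return h

def emd(x, n_imfs=2):
    """Decompose x into IMFs (fast to slow oscillations) plus a residual."""
    imfs, residual = [], x.astype(float)
    for _ in range(n_imfs):
        imf = sift(residual)
        imfs.append(imf)
        residual = residual - imf
    return imfs, residual

# A two-tone signal separates into a fast IMF and a slow IMF.
t = np.linspace(0.0, 1.0, 512)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 4 * t)
imfs, residual = emd(x, n_imfs=2)
assert np.allclose(x, sum(imfs) + residual)  # reconstruction is exact by construction
```

Second, the quoted figures are standard image-quality measures. The 96 bits-per-pixel capacity is consistent with hiding four 24-bit RGB secret images at the cover's resolution (4 × 24 = 96 bpp; this is an inference from the arithmetic, not a statement from the record), and imperceptibility is reported as PSNR/SSIM between cover and stego images. A textbook PSNR definition for 8-bit images, not code from the paper's repository:

```python
# Standard peak signal-to-noise ratio for 8-bit images.
import numpy as np

def psnr(cover, stego, peak=255.0):
    """PSNR in dB; higher means less visible distortion."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```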