Detailed Information


EctFormer: High-Imperceptibility Deep Image Steganography Based on Empirical Mode Decomposition

Authors
Duan, Xintao; Li, Sen; Wang, Zhao; Wei, Bingxin; Nam, Haewoon; Qin, Chuan
Issue Date
Aug-2025
Publisher
Institute of Electrical and Electronics Engineers Inc.
Keywords
Attention Mechanism; Empirical Mode Decomposition; Image Steganography; Transformer; Embeddings; Steganography; Visual Communication; Embedding Capacity; Feature Representation; Image Hiding; Information Transmission; Multi-images; Training Strategy; Image Enhancement
Citation
IEEE Transactions on Circuits and Systems for Video Technology
Indexed
SCIE
SCOPUS
Journal Title
IEEE Transactions on Circuits and Systems for Video Technology
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/126472
DOI
10.1109/TCSVT.2025.3603961
ISSN
1051-8215
1558-2205
Abstract
Image steganography, a crucial technique for secure information transmission, faces the challenge of balancing embedding capacity with visual imperceptibility and security. Existing methods often struggle to maximize these metrics simultaneously, particularly when handling complex image details and achieving adaptive feature representation. To address this, we propose EctFormer, a novel deep steganography framework based on Image Hiding Empirical Mode Decomposition (IHEMD). EctFormer employs a compact autoencoder architecture with a key innovation: an integrated IHEMD module that adaptively decomposes images into physically meaningful intrinsic mode functions (IMFs) and residual components. This decomposition enables superior feature representation and information embedding. Furthermore, we introduce an intrinsic mode loss function within a novel multi-image training strategy, achieving a remarkable embedding capacity of 96 bits per pixel. Experimental results on the DIV2K, COCO, and ImageNet datasets demonstrate EctFormer's superior performance. Our method significantly improves PSNR (exceeding 17.00 dB for single-image tasks and 11.00 dB for multi-image tasks) while maintaining SSIM values above 0.99. These results surpass current state-of-the-art methods, validating the efficacy of the IHEMD-based approach and the proposed training strategy. EctFormer offers a new, effective paradigm for image steganography and enables high-capacity, high-security covert communication. The code is available at https://github.com/lisen1129/EctFormer.
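
The abstract centers on decomposing an image into intrinsic mode functions (IMFs) plus a residual. The sketch below is a minimal, self-contained Python illustration of classical 1-D empirical mode decomposition, assuming only NumPy; it is not the paper's IHEMD module (which is a learned component inside EctFormer), and the function names and parameters (`emd`, `_envelope_mean`, `max_imfs`, `sift_iters`) are illustrative assumptions rather than identifiers from the authors' code.

```python
# Minimal sketch of classical 1-D empirical mode decomposition (EMD),
# for illustration only: a signal is split into oscillatory IMFs plus a
# slowly varying residual, which is the decomposition idea the abstract
# refers to. Linear-interpolated envelopes are used here for brevity;
# standard EMD uses cubic splines.
import numpy as np


def _envelope_mean(x):
    """Mean of the upper and lower envelopes of x, or None if x has too
    few extrema to form envelopes (i.e., it is essentially a trend)."""
    n = len(x)
    idx = np.arange(n)
    maxima = [i for i in range(1, n - 1) if x[i] >= x[i - 1] and x[i] >= x[i + 1]]
    minima = [i for i in range(1, n - 1) if x[i] <= x[i - 1] and x[i] <= x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return None
    upper = np.interp(idx, maxima, x[maxima])
    lower = np.interp(idx, minima, x[minima])
    return (upper + lower) / 2.0


def emd(signal, max_imfs=4, sift_iters=10):
    """Decompose `signal` into a list of IMFs and a residual trend."""
    residual = np.asarray(signal, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        if _envelope_mean(residual) is None:
            break                        # nothing oscillatory left to extract
        h = residual.copy()
        for _ in range(sift_iters):      # sifting: repeatedly subtract the envelope mean
            m = _envelope_mean(h)
            if m is None:
                break
            h = h - m
        imfs.append(h)
        residual = residual - h
    return imfs, residual


if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 512)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) + t
    imfs, res = emd(x)
    # IMFs capture detail at different scales; the residual carries the slow
    # trend -- analogous to the detail/structure split that an IHEMD-style
    # module exploits when choosing where to embed information.
    err = np.max(np.abs(sum(imfs) + res - x))
    print(f"extracted {len(imfs)} IMFs; max reconstruction error {err:.2e}")
```

By construction the IMFs and residual sum back to the original signal exactly, which is why such a decomposition is attractive for hiding data: payload can be placed in high-frequency IMFs while the residual preserves the cover image's overall structure.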
Files in This Item
There are no files associated with this item.
Appears in Collections
COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Nam, Hae woon
ERICA College of Engineering Sciences (SCHOOL OF ELECTRICAL ENGINEERING)
