Pages 307-1 - 307-14, © 2026 Society for Imaging Science and Technology
Volume 38
Issue 4
Abstract

In this paper, we make the first attempt to define the cost function of steganography with large language models (LLMs), departing entirely from previous works that rely heavily on expert knowledge or require large-scale datasets for cost learning. To achieve this goal, the proposed method applies a two-stage strategy combining LLM-guided program synthesis with evolutionary search. In the first stage, a number of cost functions, expressed as computer programs, are synthesized from LLM responses to structured prompts. These cost functions are then evaluated with pretrained steganalysis models so that candidate cost functions suited to steganography can be collected. In the second stage, a steganalysis model is retrained for each candidate cost function, and the optimal cost function(s) are determined according to the resulting detection accuracy. This two-stage strategy is performed in an iterative fashion so that the best cost function is obtained in the final iteration. Experiments show that the proposed method enables LLMs to design new steganographic cost functions that significantly outperform existing works in resisting steganalysis tools, verifying the superiority of the proposed method. To the best of the authors' knowledge, this is the first work applying LLMs to the design of advanced steganographic cost functions, which presents a novel perspective on steganography design and may shed light on further research.
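The iterate-screen-refine search described in the abstract can be sketched as a small elitist loop. Everything below is illustrative: the `propose` and `score` callables stand in for LLM program synthesis and the pretrained-steganalyzer evaluation, and all names are hypothetical, not the authors' implementation.

```python
import random

def evolutionary_cost_search(propose, score, pool_size=8, iters=4, top_k=2):
    """Two-stage loop sketched from the abstract: synthesize candidate cost
    functions, screen them with a cheap steganalysis proxy (lower score means
    harder to detect), then refine the survivors over several iterations."""
    pool = [propose(None) for _ in range(pool_size)]
    for _ in range(iters):
        # Stage 1: screen candidates with the pretrained-steganalyzer proxy.
        survivors = sorted(pool, key=score)[:top_k]
        # Stage 2 (stand-in): the paper retrains a steganalyzer per survivor;
        # here the same proxy ranks survivors and seeds the next generation.
        children = [propose(s) for s in survivors
                    for _ in range(pool_size // top_k - 1)]
        pool = survivors + children      # elitism: survivors are never lost
    return min(pool, key=score)
```

In a toy run, candidates can be plain numbers: `propose` draws a fresh value or mutates a parent, and `score` measures distance to some detection-minimizing optimum; the elitist loop then monotonically improves the best score.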

Digital Library: EI
Published Online: March 2026
Pages 308-1 - 308-9, © 2026 Society for Imaging Science and Technology
Volume 38
Issue 4
Abstract

Protecting the intellectual property of natural language encoders faces a critical challenge: hidden watermarks are easily erased when models are fine-tuned to adapt to downstream applications, a process known as “task migration”. To deal with this problem, we introduce a Task Migration Resistant Watermarking (TMRW) framework that strengthens watermark robustness against task migration. The proposed method uses a dual-objective fine-tuning strategy. During watermark embedding, a specifically designed watermark loss compels the encoder to map a set of trigger inputs into a compact cluster in the embedding space. To counteract the potential performance degradation introduced by this process, an augmented contrastive loss is simultaneously optimized to preserve the encoder’s general semantic representation abilities. This dual-objective strategy is further enhanced by a novel trigger corpus crafting method that ensures the watermark’s stealthiness. Experimental results show that the proposed method embeds a robust watermark that significantly outperforms existing techniques in resisting erasure from task migration. This work addresses the challenge of encoder watermark durability against task migration and provides a novel and practical framework for intellectual property protection in natural language processing systems.
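A minimal sketch of the dual-objective idea: a squared-distance compactness term pulls trigger embeddings toward their centroid, while a triplet-style contrastive term preserves general semantic structure. The function names, the exact loss forms, and the weighting are hypothetical stand-ins, not the TMRW implementation.

```python
def watermark_loss(trigger_embs):
    """Mean squared distance of trigger embeddings to their centroid;
    minimizing this pulls the triggers into a compact cluster."""
    d = len(trigger_embs[0])
    centroid = [sum(e[i] for e in trigger_embs) / len(trigger_embs) for i in range(d)]
    return sum(sum((e[i] - centroid[i]) ** 2 for i in range(d))
               for e in trigger_embs) / len(trigger_embs)

def dual_objective(trigger_embs, anchor, positive, negative, lam=0.5, margin=1.0):
    """Weighted sum of the watermark term and a triplet-style contrastive
    term that keeps semantically related inputs closer than unrelated ones."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    contrastive = max(0.0, sqdist(anchor, positive) - sqdist(anchor, negative) + margin)
    return watermark_loss(trigger_embs) + lam * contrastive
```

The weight `lam` trades watermark compactness against preservation of the encoder's general representations; both terms are differentiable, so the sum could be optimized jointly during fine-tuning.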

Digital Library: EI
Published Online: March 2026
Pages 309-1 - 309-9, © 2026 Society for Imaging Science and Technology
Volume 38
Issue 4
Abstract

Recent advances confirm that large language models (LLMs) can achieve state-of-the-art performance across various tasks. However, given the resource-intensive nature of training LLMs from scratch, protecting their intellectual property against infringement is urgent and crucial. This motivates us to propose a novel black-box fingerprinting technique for LLMs. We first demonstrate that the outputs of LLMs span a unique vector space associated with each model. We then model fingerprint authentication as the task of evaluating the similarity between the space of the victim model and the space of the suspect model. To tackle this problem, we introduce two solutions: the first determines whether suspect outputs lie within the victim’s subspace, enabling fast infringement detection; the second reconstructs a joint subspace to detect models modified via parameter-efficient fine-tuning (PEFT). Experiments indicate that the proposed method achieves superior performance in fingerprint verification and robustness against PEFT attacks. This work reveals inherent characteristics of LLMs and provides a promising solution for protecting them, ensuring efficiency, generality and practicality.
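The first solution, checking whether suspect outputs lie within the victim's subspace, can be illustrated with a projection-residual test: build an orthonormal basis of the victim's output span and measure how much of a suspect output vector falls outside it. The SVD-based construction and function names below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def victim_subspace(outputs, rank):
    """Orthonormal basis (rows) for the span of victim-model output vectors,
    taken from the top right-singular vectors of the output matrix."""
    _, _, vt = np.linalg.svd(outputs, full_matrices=False)
    return vt[:rank]

def residual_ratio(basis, suspect_vec):
    """Fraction of the suspect vector's norm lying outside the victim
    subspace; values near 0 suggest the suspect's outputs live in the
    victim's space (a fast infringement signal)."""
    proj = basis.T @ (basis @ suspect_vec)
    return float(np.linalg.norm(suspect_vec - proj) / np.linalg.norm(suspect_vec))
```

In practice a threshold on the residual ratio would separate in-subspace from out-of-subspace outputs; the joint-subspace variant for PEFT-modified models would augment the basis with suspect outputs before testing.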

Digital Library: EI
Published Online: March 2026
Pages 310-1 - 310-7, © 2026 Society for Imaging Science and Technology
Volume 38
Issue 4
Abstract

This paper proposes Uniform Switching Identities (USWIDs) as a lightweight, collusion-resistant identity scheme designed for computationally constrained environments. USWIDs use uniformly structured identities and simplified penalization logic while retaining effectiveness comparable to classical Tardos-based methods. Through simulations, we show that USWIDs maintain robustness even under lossy feedback conditions, which are common in forensic watermarking scenarios. Compared to Approximated Tardos Switching Identities (AT-SWIDs), USWIDs offer improved scalability, ease of generation, and operational simplicity without compromising traceability. The findings suggest USWIDs are a viable alternative for practical traitor tracing systems, especially where delivery, derivation cost, and resilience to partial symbol loss are critical deployment factors.
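A toy version of simplified accusation scoring under a lossy feedback channel might look as follows; the ±1 uniform penalty, the `None` encoding of lost symbols, and the threshold are illustrative assumptions, not the USWID construction itself.

```python
def accusation_scores(identities, pirate_copy, threshold):
    """Simplified uniform-penalty accusation: +1 for a symbol matching the
    recovered pirate copy, -1 for a mismatch; positions lost in the feedback
    channel (None) are simply skipped. Returns the set of accused users."""
    accused = set()
    for user, codeword in identities.items():
        score = 0
        for sym, pirate_sym in zip(codeword, pirate_copy):
            if pirate_sym is None:      # symbol never observed (lossy channel)
                continue
            score += 1 if sym == pirate_sym else -1
        if score >= threshold:
            accused.add(user)
    return accused
```

Skipping lost positions rather than penalizing them is what makes such a score degrade gracefully under partial symbol loss; a Tardos-style scheme would instead weight each position by its symbol-bias distribution.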

Digital Library: EI
Published Online: March 2026
Pages 312-1 - 312-7, © 2026 Society for Imaging Science and Technology
Volume 38
Issue 4
Abstract

Face swapping, or deepfake generation, remains a challenging task that requires balancing identity preservation, attribute consistency, and photorealism. We propose a novel training-free, three-stage face swapping framework that improves realism by explicitly aligning illumination and skin appearance prior to diffusion-based synthesis. Our approach refines photometric consistency and skin tone while preserving facial structure, and it integrates seamlessly with an off-the-shelf diffusion face swapping model. Experiments on the CelebAMask-HQ dataset demonstrate significant improvements in both visual realism and attribute preservation, achieving an FID score of 7.16 compared to the baseline. The proposed method provides an efficient and robust solution for realistic face swapping under varying illumination and appearance conditions without additional model training.
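The photometric alignment stage can be illustrated by simple per-channel mean/std matching in the style of Reinhard color transfer; this is a hedged sketch of one plausible alignment step under that assumption, not the paper's three-stage pipeline.

```python
def match_channel_stats(source, reference):
    """Shift and scale one color channel of the source face so its mean and
    standard deviation match the reference skin region, a basic photometric
    alignment step applied before diffusion-based synthesis."""
    def stats(xs):
        mean = sum(xs) / len(xs)
        std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
        return mean, std
    ms, ss = stats(source)
    mr, sr = stats(reference)
    scale = sr / ss if ss > 0 else 1.0   # guard against a flat channel
    return [(x - ms) * scale + mr for x in source]
```

Applied per channel in a perceptual color space, this kind of matching aligns overall illumination and skin tone while leaving spatial facial structure untouched, which is what lets it compose with an off-the-shelf diffusion face swapper.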

Digital Library: EI
Published Online: March 2026
