Proceedings Paper
Volume: 38 | Article ID: MWSF-309
Unveiling Hidden Model Fingerprints in API-protected LLMs
DOI: 10.2352/EI.2026.38.4.MWSF-309 | Published Online: March 2026
Abstract

Recent advances confirm that large language models (LLMs) can achieve state-of-the-art performance across various tasks. However, because training LLMs from scratch is resource-intensive, it is urgent and crucial to protect the intellectual property of LLMs against infringement. This motivates us to propose a novel black-box fingerprinting technique for LLMs. We first demonstrate that the outputs of an LLM span a unique vector space associated with that model. We then model fingerprint authentication as the task of evaluating the similarity between the output space of the victim model and that of the suspect model. To tackle this problem, we introduce two solutions: the first determines whether suspect outputs lie within the victim's subspace, enabling fast infringement detection; the second reconstructs a joint subspace to detect models modified via parameter-efficient fine-tuning (PEFT). Experiments indicate that the proposed method achieves superior performance in fingerprint verification and robustness against PEFT attacks. This work reveals inherent characteristics of LLMs and provides a promising solution for protecting them, ensuring efficiency, generality, and practicality.
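The first solution described above can be illustrated with a small numerical sketch. The key observation is that an LLM's output logits are a linear image of its hidden states (logits = W h with W of rank at most the hidden dimension), so observed outputs span a low-dimensional subspace characteristic of the model. The snippet below is a hypothetical toy illustration, not the authors' implementation: all matrix sizes, the SVD-based subspace estimate, and the residual threshold are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): logits = W @ h, so outputs lie in a
# subspace of dimension <= hidden, far below the vocabulary size.
vocab, hidden = 1000, 32
W_victim = rng.standard_normal((vocab, hidden))  # victim's output projection
W_other = rng.standard_normal((vocab, hidden))   # an unrelated model

def outputs(W, n):
    """Sample n logit vectors from a model's output subspace (columns)."""
    H = rng.standard_normal((W.shape[1], n))
    return W @ H

# Estimate the victim's output subspace from observed API outputs via SVD.
V = outputs(W_victim, 200)
U, _, _ = np.linalg.svd(V, full_matrices=False)
basis = U[:, :hidden]  # orthonormal basis of the estimated victim subspace

def residual(basis, X):
    """Mean relative residual of columns of X outside the subspace."""
    proj = basis @ (basis.T @ X)
    return float(np.mean(
        np.linalg.norm(X - proj, axis=0) / np.linalg.norm(X, axis=0)))

same = residual(basis, outputs(W_victim, 50))  # suspect derived from victim
diff = residual(basis, outputs(W_other, 50))   # independent model
print(f"same-model residual: {same:.3e}, other-model residual: {diff:.3f}")
```

Under these assumptions, outputs of the victim model project almost entirely into the estimated subspace (residual near machine precision), while outputs of an independent model leave a large residual, giving a fast membership test of the kind the first solution describes.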

  Cite this article 

Zhiguang Yang, Hanzhou Wu, "Unveiling Hidden Model Fingerprints in API-protected LLMs," in Electronic Imaging, 2026, pp. 309-1 - 309-9, https://doi.org/10.2352/EI.2026.38.4.MWSF-309

  Copyright statement 
Copyright ©2026 Society for Imaging Science and Technology
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
IS&T 7003 Kilworth Lane, Springfield, VA 22151 USA