Knowing the history of global processing applied to an image can be very important for the forensic analyst to correctly establish the image's pedigree, trustworthiness, and integrity. Global edits have been proposed in the past for "laundering" manipulated content because they can negatively affect the reliability of many forensic techniques. In this paper, we focus on the more difficult and less frequently addressed case when the processed image is JPEG compressed. First, a bank of binary linear classifiers with rich media models is built to distinguish between unprocessed images and images subjected to a specific processing class. For better scalability, the detector is not built in the rich feature space but in the space of projections of the features onto the weight vectors of the linear classifiers. This decreases the computational complexity of the detector and, most importantly, allows estimation of the distribution of the projections by fitting a multivariate Gaussian model to each processing class, so that the final classifier can be constructed as a maximum-likelihood detector. Well-fitting analytic models permit a more rigorous construction of the detector that would be unachievable in the original high-dimensional rich feature space. Experiments on grayscale as well as color images with a range of JPEG quality factors and four processing classes demonstrate the merit of the proposed methodology.
Mehdi Boroumand, Jessica Fridrich, "Scalable Processing History Detector for JPEG Images," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Media Watermarking, Security, and Forensics, 2017, pp. 128–137, https://doi.org/10.2352/ISSN.2470-1173.2017.7.MWSF-336
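The detection pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weight vectors, feature dimensions, and synthetic Gaussian "rich features" below are placeholders (in the paper, the weight vectors come from binary linear classifiers trained on rich media models to separate unprocessed images from each processing class). What the sketch does show is the core idea: project high-dimensional features onto the K classifier weight vectors, fit a multivariate Gaussian to each class's projections, and classify by maximum likelihood in the low-dimensional projection space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: D-dimensional rich features, K processing classes.
D, K, n_train = 500, 4, 200

# Stand-in weight vectors. In the paper, these are the weight vectors of
# binary linear classifiers (unprocessed vs. one processing class each);
# here they are random placeholders.
W = rng.standard_normal((K, D))

def project(X, W):
    """Project rich features onto the K classifier weight vectors."""
    return X @ W.T

# Synthetic per-class training features (placeholders for real rich
# features extracted from images); class 0 plays the role of "unprocessed".
class_means = 2.0 * rng.standard_normal((K + 1, D))
train = {c: class_means[c] + rng.standard_normal((n_train, D))
         for c in range(K + 1)}

# Fit a multivariate Gaussian to each class's K-dimensional projections.
models = {}
for c, X in train.items():
    P = project(X, W)
    mu = P.mean(axis=0)
    cov = np.cov(P, rowvar=False) + 1e-6 * np.eye(K)  # small ridge for stability
    models[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])

def classify(x):
    """Maximum-likelihood decision in the K-dimensional projection space."""
    p = project(x, W)
    def loglik(c):
        mu, cov_inv, logdet = models[c]
        d = p - mu
        return -0.5 * (d @ cov_inv @ d + logdet)
    return max(models, key=loglik)
```

Working with the K-dimensional projections rather than the full D-dimensional feature space is what makes the Gaussian fit tractable: a K x K covariance matrix can be estimated reliably from modest training sets, whereas a D x D one cannot.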