The ease of capturing, manipulating, distributing, and consuming digital media (e.g., images, audio, video, graphics, and text) has enabled new applications and brought a number of important security challenges to the forefront. These challenges have prompted significant research and development in the areas of digital watermarking, steganography, data hiding, forensics, deepfakes, media identification, biometrics, and encryption to protect owners’ rights, establish the provenance and veracity of content, and preserve privacy. Research results in these areas have been translated into new paradigms and applications for monetizing media while maintaining ownership rights, as well as new biometric and forensic identification techniques and novel methods for ensuring privacy. The Media Watermarking, Security, and Forensics Conference is a premier destination for disseminating high-quality, cutting-edge research in these areas. The conference provides an excellent venue for researchers and practitioners to present their innovative work as well as to keep abreast of the latest developments in watermarking, security, and forensics. Early results and fresh ideas are particularly encouraged and supported by the conference review format: only a structured abstract describing the work in progress and preliminary results is initially required, and the full paper is requested just before the conference. The technical program is complemented by keynote talks, panel sessions, and short demos involving both academic and industrial researchers and practitioners. A strong focus on how research results are applied by industry, in practice, also gives the conference its unique flavor.
In this article, we study a recently proposed method for improving the empirical security of steganography in JPEG images in which the sender starts with an additive embedding scheme with symmetric costs of ±1 changes and then decreases the cost of one of these changes based on an image obtained by applying a deblocking (JPEG dequantization) algorithm to the cover JPEG. This approach provides rather significant gains in security at negligible embedding complexity overhead for a wide range of quality factors and across various embedding schemes. Challenging the original explanation of the inventors of this idea, which is based on interpreting the dequantized image as an estimate of the precover (uncompressed) image, we provide alternative arguments. The key observation, and the main reason why this approach works, is how the polarizations of individual DCT coefficients act together. Using a MiPOD model of the content complexity of the uncompressed cover image, we show that the cost polarization technique decreases the chances of “bad” combinations of embedding changes that would likely be introduced by the original scheme with symmetric costs. This statement is quantified by computing the likelihood of the stego image w.r.t. the multivariate Gaussian precover distribution in the DCT domain. Furthermore, it is shown that cost polarization decreases spatial discontinuities between blocks (blockiness) in the stego image and enforces desirable correlations of embedding changes across blocks. To further prove the point, it is shown that in a source that adheres to the precover model, a simple Wiener filter can serve equally well as a deep-learning-based deblocker.
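To make the polarization step concrete, the following is a minimal sketch of one plausible realization of the idea described above: the symmetric ±1 costs are split into directional costs, and the cost of the change that moves a DCT coefficient toward its deblocked estimate is reduced. The multiplicative scaling and the strength parameter alpha are assumptions made for illustration, not the exact rule studied in the paper.

```python
import numpy as np

def polarize_costs(dct_cover, dct_deblocked, rho, alpha=0.5):
    """Sketch of cost polarization (parameterization is hypothetical).

    dct_cover     : quantized DCT coefficients of the cover JPEG
    dct_deblocked : DCT coefficients of the deblocked image, expressed
                    in the same quantized units as dct_cover
    rho           : symmetric cost of a +/-1 change per coefficient
    alpha         : assumed polarization strength in (0, 1)

    The cost of the change that moves a coefficient toward its deblocked
    estimate is reduced; the opposite change keeps its original cost.
    """
    rho_plus, rho_minus = rho.copy(), rho.copy()
    residual = dct_deblocked - dct_cover    # preferred direction of change
    rho_plus[residual > 0] *= 1.0 - alpha   # cheaper to move up toward estimate
    rho_minus[residual < 0] *= 1.0 - alpha  # cheaper to move down toward estimate
    return rho_plus, rho_minus
```

In the setting discussed above, the deblocked reference would come either from a deep-learning deblocker or, for sources adhering to the precover model, from a simple Wiener filter applied to the decompressed image.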
Many areas of forensics are moving away from the notion of classifying evidence simply as a match or non-match. Instead, some use score-based likelihood ratios (SLRs) to quantify the similarity between two pieces of evidence, such as a fingerprint obtained from a crime scene and a fingerprint obtained from a suspect. We apply trace-anchored score-based likelihood ratios to the camera device identification problem. We use photo-response non-uniformity (PRNU) as a camera fingerprint and one minus the normalized correlation as a similarity score. We calculate trace-anchored SLRs for 10,000 images from seven camera devices from the BOSSbase image dataset. We include a comparison between our results and those of the universal detector method.
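As a rough illustration of the pipeline described above, the sketch below computes the similarity score (one minus the normalized correlation) between two PRNU fingerprints and forms a trace-anchored SLR from score samples. The Gaussian kernel density estimates for the two score distributions are an assumed modeling choice for illustration; the actual density models may differ.

```python
import numpy as np
from scipy.stats import gaussian_kde

def similarity_score(prnu_trace, prnu_candidate):
    """One minus the normalized correlation between two PRNU fingerprints."""
    a = prnu_trace.ravel() - prnu_trace.mean()
    b = prnu_candidate.ravel() - prnu_candidate.mean()
    ncc = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - ncc

def trace_anchored_slr(score, same_source_scores, diff_source_scores):
    """Score-based likelihood ratio anchored to the trace: both score
    samples are computed against the same trace fingerprint, from images
    known to come from the same device (numerator) and from other
    devices (denominator). Gaussian KDEs are an assumed density model."""
    f_same = gaussian_kde(same_source_scores)
    f_diff = gaussian_kde(diff_source_scores)
    return f_same(score)[0] / f_diff(score)[0]
```

An SLR well above 1 supports the same-source hypothesis for the observed score; values below 1 support the different-source hypothesis.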
A new rule for modulating costs in side-informed steganography is proposed. The cost modulation factors are determined by the minimum perturbation of the precover needed for it to quantize to the desired stego value. This new rule is contrasted with the established way of weighting costs by the difference between the rounding errors to the cover and stego values. Experiments demonstrate that the new rule improves the security of ternary side-informed UNIWARD in the JPEG domain. The new rule arises naturally as the correct cost modulation for JPEG side-informed steganography with the “trunc” quantizer used in many portable digital imaging devices.
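For intuition, here is a minimal sketch contrasting the two modulation rules for the common rounding quantizer (the “trunc” quantizer mentioned above is not covered here). With cover c = round(x) and rounding error e = x - c, the established rule weights the change in the direction of e by 1 - 2|e|, while the minimum-perturbation rule weights a +1 change by 0.5 - e and a -1 change by 0.5 + e. The exact normalization is an assumption for illustration.

```python
import numpy as np

def modulation_factors(x):
    """Sketch of two cost-modulation rules for side-informed embedding.

    x : unquantized ("precover") DCT coefficients in quantized units.

    Returns ((w_old_plus, w_old_minus), (w_new_plus, w_new_minus)),
    multiplicative weights applied to the symmetric cost of a +1 / -1
    change of the cover value c = round(x).
    """
    c = np.round(x)
    e = x - c  # rounding error, in [-0.5, 0.5]

    # Established rule: the change in the direction of the rounding error
    # is weighted by the difference between the rounding errors to the
    # cover and stego values, 1 - 2|e|; the opposite change keeps full cost.
    w_old_plus = np.where(e >= 0, 1.0 - 2.0 * np.abs(e), 1.0)
    w_old_minus = np.where(e < 0, 1.0 - 2.0 * np.abs(e), 1.0)

    # New rule: the weight equals the minimum perturbation of the precover
    # value needed to make it quantize to the desired stego value:
    # 0.5 - e for a +1 change, 0.5 + e for a -1 change.
    w_new_plus = 0.5 - e
    w_new_minus = 0.5 + e

    return (w_old_plus, w_old_minus), (w_new_plus, w_new_minus)
```

Note that both rules make the change aligned with the rounding error cheap when |e| approaches 0.5, but they assign different weights to the opposite change, which is where the ternary schemes compared above diverge.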