Keywords: Covert Communication; Deep Learning; Discrete Cosine Transform (DCT); DeepFake Detection; Efficiency; Forensic Science; Face Forgery; Handwriting Analysis; H.264; Image Forensics; Large Language Model; Multimedia Forensics; Manuscript Investigation; Mammograms; Motion Vectors; Machine Learning; Medical Imaging; Optical Flow; Prompting Steganography; Radiology; Video; Video Deepfake Detection; Writer Identification
 
Pages 333-1 - 333-6, © 2024, Society for Imaging Science and Technology
Volume 36
Issue 4
Abstract

A new algorithm for the detection of deepfakes in digital videos is presented. Only the I-frames were extracted, providing faster computation and analysis than approaches described in the literature. To identify the discriminating regions within individual video frames, the entire frame, background, face, eyes, nose, mouth, and face frame were analyzed separately. From the Discrete Cosine Transform (DCT), the β components were extracted from the AC coefficients and used as input to standard classifiers. Experimental results show that the eye and mouth regions are the most discriminative and best able to determine the nature of the video under analysis.
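The β statistic here is presumably the scale parameter of a Laplacian model fitted to the AC coefficients, as is common in DCT-based forensics. The sketch below is only a minimal illustration of that idea, not the authors' implementation: the 8×8 block size, the zero-mean assumption, and the function names are my own choices.

```python
# Minimal sketch: per-frequency Laplacian scale (beta) features from 8x8 DCT blocks.
# Assumptions (not from the paper): grayscale input, zero-mean AC coefficients,
# maximum-likelihood estimate beta = mean(|coefficient|) per AC position.
import numpy as np
from scipy.fft import dctn

def dct_beta_features(gray: np.ndarray, block: int = 8) -> np.ndarray:
    h, w = gray.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of the block size
    coeffs = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = dctn(gray[y:y + block, x:x + block].astype(np.float64), norm="ortho")
            coeffs.append(c.ravel()[1:])           # drop the DC term, keep 63 AC terms
    coeffs = np.abs(np.array(coeffs))              # shape: (num_blocks, 63)
    return coeffs.mean(axis=0)                     # one beta estimate per AC frequency

# The 63-dimensional vectors (per region: eyes, mouth, ...) would then feed a
# standard classifier such as an SVM or k-NN.
```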

Digital Library: EI
Published Online: January 2024
Pages 335-1 - 335-9, This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Volume 36
Issue 4
Abstract

Video DeepFakes are fake media created with Deep Learning (DL) that manipulate a person’s expression or identity. Most current DeepFake detection methods analyze each frame independently, ignoring inconsistencies and unnatural movements between frames. Some newer methods employ optical flow models to capture this temporal aspect, but they are computationally expensive. In contrast, we propose using the related but often ignored Motion Vectors (MVs) and Information Masks (IMs) from the H.264 video codec to detect temporal inconsistencies in DeepFakes. Our experiments show that this approach is effective and adds minimal computational cost compared with per-frame RGB-only methods. This could lead to new, real-time, temporally aware DeepFake detection methods for video calls and streaming.
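In an H.264 stream the motion vectors are already computed by the encoder and can be read from the decoded bitstream at essentially no extra cost (e.g. via FFmpeg's +export_mvs flag). The toy sketch below is not that extraction; it is a self-contained illustration of what a block-level motion-vector field looks like, estimated by naive exhaustive block matching between two consecutive grayscale frames. The block size and search radius are arbitrary choices of mine.

```python
# Toy illustration only: brute-force block matching between two consecutive frames.
# In practice the MVs come directly from the H.264 decoder rather than being recomputed.
import numpy as np

def block_motion_vectors(prev: np.ndarray, curr: np.ndarray,
                         block: int = 16, radius: int = 8) -> np.ndarray:
    h, w = curr.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = curr[y:y + block, x:x + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    ref = prev[yy:yy + block, xx:xx + block].astype(np.int32)
                    sad = np.abs(target - ref).sum()   # sum of absolute differences
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs  # per-block (dy, dx); temporal inconsistencies show up as noisy fields
```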

Digital Library: EI
Published Online: January 2024
Pages 338-1 - 338-11, © 2024, Society for Imaging Science and Technology
Volume 36
Issue 4
Abstract

Recent studies show that scaling pre-trained language models can lead to significantly improved model capacity on downstream tasks, resulting in a new research direction called large language models (LLMs). A remarkable application of LLMs is ChatGPT, a powerful large language model capable of generating human-like text based on context and past conversations. LLMs have been shown to possess impressive reasoning skills, especially when suitable prompting strategies are used. In this paper, we explore the possibility of applying LLMs to steganography, the art of hiding secret data in an innocent cover for covert communication. Our purpose is not to integrate an LLM into an already designed steganographic system to boost its performance, which would follow the conventional framework of steganography. Instead, we expect that, through prompting, an LLM can realize steganography by itself, which we define as prompting steganography and which may be a new paradigm of steganography. We show that, by reasoning, an LLM can embed secret data into a cover and extract secret data from a stego object, albeit with a certain error rate. This error rate, however, can be reduced by optimizing the prompt, which may shed light on further research.
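As a rough illustration of the prompting idea (not the paper's actual prompts), one could ask the model to encode each secret bit as a choice between two candidate words while writing an innocuous sentence, and to reverse the mapping at extraction time. In the sketch below, query_llm is a hypothetical stand-in for whatever chat API is used; the word-pair convention, prompt wording, and cover topic are all my own assumptions.

```python
# Hypothetical sketch of prompting steganography: the LLM itself hides and recovers
# the bits, guided only by the prompt. query_llm() is a placeholder for a real
# chat-completion call; the word-pair convention below is an illustrative assumption.

# One (word_for_0, word_for_1) pair per secret bit, shared by sender and receiver.
WORD_PAIRS = [("cold", "chilly"), ("rain", "drizzle"), ("wind", "breeze"), ("sky", "clouds")]

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a chat API of your choice here")

def build_embed_prompt(secret_bits: str) -> str:
    choices = [pair[int(b)] for pair, b in zip(WORD_PAIRS, secret_bits)]
    return (
        "Write one natural-sounding sentence about today's weather that uses "
        f"exactly these words, in this order: {', '.join(choices)}. "
        "Output only the sentence."
    )

def build_extract_prompt(stego_text: str) -> str:
    pairs = ", ".join(f"({a} -> 0, {b} -> 1)" for a, b in WORD_PAIRS)
    return (
        f"A sentence hides one bit per word pair, using the mapping {pairs}. "
        "For each pair, report which word appears in the sentence below and output "
        f"only the resulting bit string:\n{stego_text}"
    )

# Embedding and extraction are then two plain LLM calls:
#   stego = query_llm(build_embed_prompt("1011"))
#   bits  = query_llm(build_extract_prompt(stego))
# The bit error rate between "1011" and bits measures decoding accuracy, and the
# prompts can be iteratively refined to reduce it.
```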

Digital Library: EI
Published Online: January 2024
Pages 341-1 - 341-6, © 2024, Society for Imaging Science and Technology
Volume 36
Issue 4
Abstract

Forensic handwriting examination is a branch of Forensic Science that aims to examine handwritten documents in order to establish or hypothesize the manuscript’s author. This analysis involves comparing two or more (digitized) documents through a comprehensive comparison of intrinsic local and global features. If a correlation exists and specific best practices are satisfied, then it is possible to affirm that the documents under analysis were written by the same individual. The need for sophisticated tools capable of extracting and comparing significant features has led to the development of cutting-edge software with almost entirely automated processes, improving the forensic examination of handwriting and achieving increasingly objective evaluations. This is made possible by algorithmic solutions based on purely mathematical concepts. Machine Learning and Deep Learning models trained on specific datasets could turn out to be the key elements to best solve the task at hand. In this paper, we propose a new and challenging dataset consisting of two subsets: the first consists of 21 documents written either with the classic “pen and paper” approach (and later digitized) or acquired directly on common devices such as tablets; the second consists of 362 handwritten manuscripts by 124 different people, acquired following a specific pipeline. Our study pioneers a comparison between traditionally handwritten documents and those produced with digital tools (e.g., tablets). Preliminary results on the proposed datasets show that 90% classification accuracy can be achieved on the first subset (documents written with pen and paper and later digitized, and documents written on tablets) and 96% on the second. The datasets are available at https://iplab.dmi.unict.it/mfs/forensichandwriting-analysis/novel-dataset-2023/.
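Purely as an illustration of the kind of automated pipeline such a dataset enables (not the authors' method), a baseline writer-identification experiment could extract simple texture descriptors from the digitized pages and train a standard classifier. The HOG-plus-SVM combination, image size, and hyperparameters below are my own assumptions.

```python
# Illustrative baseline only (not the paper's pipeline): HOG texture descriptors
# from digitized handwriting images, classified per writer with a linear SVM.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def hog_descriptor(gray: np.ndarray) -> np.ndarray:
    gray = resize(gray, (256, 256), anti_aliasing=True)
    return hog(gray, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

def writer_id_baseline(images, writer_labels) -> float:
    X = np.array([hog_descriptor(img) for img in images])
    y = np.array(writer_labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)   # classification accuracy on held-out pages
```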

Digital Library: EI
Published Online: January 2024
Pages 342-1 - 342-5, © 2024, Society for Imaging Science and Technology
Volume 36
Issue 4
Abstract

Advances in AI allow for the creation of fake images. These techniques can be used to fake mammograms, which could impact patient care and medicolegal cases. One method to verify that an image is original is to confirm the source of the image. DeepMammo, a deep-learning algorithm based on CNNs and FCNNs, is used to identify the machine that created a given mammogram. We analyze mammograms of 1574 patients obtained on 7 different mammography machines and randomly split the dataset by patient into training/validation (80%) and test (20%) sets. DeepMammo achieves an accuracy of 98.09% and an AUC of 95.96% on the test dataset.
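The patient-level split is the methodologically important detail here: all images from one patient must land entirely in either the training/validation set or the test set. Below is a minimal sketch of such a split using scikit-learn's GroupShuffleSplit; the 80/20 ratio mirrors the abstract, while the variable names and everything else are assumptions.

```python
# Minimal sketch of a patient-level 80/20 split, so that no patient's images
# appear in both partitions. GroupShuffleSplit treats the patient ID as the group.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(image_paths, machine_labels, patient_ids, seed=0):
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=seed)
    (train_idx, test_idx), = splitter.split(image_paths, machine_labels, groups=patient_ids)
    # Sanity check: no patient appears on both sides of the split.
    assert not set(np.take(patient_ids, train_idx)) & set(np.take(patient_ids, test_idx))
    return train_idx, test_idx  # indices into the image list, grouped by patient
```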

Digital Library: EI
Published Online: January 2024
