A new algorithm for the detection of deepfakes in digital videos is presented. Only the I-frames are extracted, providing faster computation and analysis than approaches described in the literature. To identify the most discriminative regions within individual video frames, the entire frame, background, face, eyes, nose, mouth, and face frame are analyzed separately. The β components are estimated from the AC coefficients of the Discrete Cosine Transform (DCT) and used as input to standard classifiers. Experimental results show that the eye and mouth regions are the most discriminative and are able to determine the nature of the video under analysis.
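To make the feature-extraction step concrete, the following is a minimal sketch, assuming an 8×8 block DCT, a zero-mean Laplacian model for each AC frequency (whose maximum-likelihood scale estimate β is simply the mean absolute coefficient), and a linear SVM standing in for the "standard classifiers". The function name and the region-cropping step left to the caller are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the DCT beta-statistics idea described above.
# Assumptions (not specified in the abstract): 8x8 block DCT, a zero-mean
# Laplacian model per AC frequency, and a linear SVM as the classifier.
# Cropping the region of interest (face, eyes, mouth, ...) from each
# I-frame is left to the caller.
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def beta_features(gray_region: np.ndarray) -> np.ndarray:
    """Estimate the Laplacian scale (beta) of each of the 63 AC
    frequencies over all 8x8 DCT blocks of a grayscale region."""
    h, w = gray_region.shape
    h, w = h - h % 8, w - w % 8                    # crop to a multiple of 8
    blocks = (gray_region[:h, :w]
              .reshape(h // 8, 8, w // 8, 8)
              .transpose(0, 2, 1, 3)
              .reshape(-1, 8, 8)
              .astype(np.float64))
    # 2-D type-II DCT applied per block
    coeffs = dct(dct(blocks, axis=1, norm='ortho'), axis=2, norm='ortho')
    coeffs = coeffs.reshape(-1, 64)[:, 1:]         # drop DC, keep 63 AC
    # MLE of the Laplacian scale for zero mean: beta = E[|x|]
    return np.abs(coeffs).mean(axis=0)

# Usage sketch: each row of X is the beta vector of one region crop.
# X_train, y_train = ..., ...                      # hypothetical labeled data
# clf = SVC(kernel='linear').fit(X_train, y_train)
```

The 63-dimensional β vector is compact enough that simple classifiers suffice, which is consistent with the abstract's emphasis on fast computation.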
In the past several years, generative adversarial networks (GANs) have emerged that are capable of creating realistic synthetic images of human faces. Because these images can be used for malicious purposes, researchers have begun to develop techniques to detect synthetic images. Currently, the majority of existing techniques operate by searching for statistical traces introduced when an image is synthesized by a GAN. An alternative approach that has received comparatively less attention involves using semantic inconsistencies to detect synthetic images. While GAN-generated synthetic images appear visually realistic at first glance, they often contain subtle semantic inconsistencies such as inconsistent eye highlights, misaligned teeth, and unrealistic hair textures. In this paper, we propose a new approach to detecting GAN-generated images of human faces by searching for semantic inconsistencies in multiple facial features, such as the eyes, mouth, and hair. Synthetic-image detection decisions are made by fusing the outputs of these facial-feature-level detectors. Through a series of experiments, we demonstrate that this approach can yield strong synthetic-image detection performance. Furthermore, we experimentally demonstrate that our approach is less susceptible to performance degradation caused by post-processing than CNN-based detectors that rely on statistical traces.
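As an illustration of the detection-and-fusion idea, here is a minimal PyTorch sketch. It assumes the facial regions have already been cropped (e.g., via facial landmarks), uses a deliberately small per-region CNN, and fuses the per-region logits with a learned linear layer; the actual architectures, region set, and fusion rule in the paper may differ.

```python
# Minimal PyTorch sketch of facial-feature-level detection with fusion.
# Assumptions (not from the paper): region crops are precomputed, each
# region gets its own tiny CNN, and fusion is a learned linear layer.
import torch
import torch.nn as nn

class RegionDetector(nn.Module):
    """Tiny per-region CNN producing a single real/synthetic logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

class FusedDetector(nn.Module):
    """Fuses per-region logits into one synthetic-image decision."""
    def __init__(self, regions=('eyes', 'mouth', 'hair')):
        super().__init__()
        self.detectors = nn.ModuleDict({r: RegionDetector() for r in regions})
        self.fusion = nn.Linear(len(regions), 1)   # learned score fusion

    def forward(self, crops: dict):
        # crops maps region name -> (N, 3, H, W) tensor of that region
        logits = torch.cat(
            [self.detectors[r](crops[r]) for r in self.detectors], dim=1)
        return self.fusion(logits)                 # one fused logit per image

# Usage sketch with hypothetical 64x64 crops:
# crops = {r: torch.randn(4, 3, 64, 64) for r in ('eyes', 'mouth', 'hair')}
# score = FusedDetector()(crops)
```

One plausible reason this structure resists post-processing, per the abstract's claim, is that semantic cues such as eye highlights survive operations like recompression that destroy low-level statistical traces.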
Forensics research has developed several techniques to identify the model and manufacturer of a digital image or video's source camera. However, to the best of our knowledge, no work has been performed to identify the manufacturer and model of the scanner that captured an MRI image. MRI source identification can have several important applications, including discovering scientific fraud, exposing issues around the anonymity and privacy of medical records, protecting against malicious tampering of medical images, and validating AI-based diagnostic techniques whose performance varies across different MRI scanners. In this paper, we propose a new CNN-based approach to learn the forensic traces left by an MRI scanner and use these traces to identify the manufacturer and model of the scanner that captured an MRI image. Additionally, we identify an issue called weight divergence that can occur when training CNNs using a constrained convolutional layer, and we propose three new correction functions to protect against it. Our experimental results show that we can identify an MRI scanner's manufacturer with 97.88% accuracy and its model with 91.07% accuracy. Additionally, we show that our proposed correction functions can noticeably improve our CNN's accuracy when performing scanner model identification.
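For context, a constrained convolutional layer of the kind referenced here (in the style of Bayar and Stamm) fixes each filter's center tap to -1 and renormalizes the surrounding taps to sum to 1 after every weight update, so the layer learns prediction-error (residual) filters. The sketch below shows only this baseline projection; the three correction functions against weight divergence are not detailed in the abstract, so they are not reproduced, and the layer sizes are illustrative.

```python
# Minimal PyTorch sketch of a constrained convolutional layer: after each
# optimizer step, project every filter back onto the prediction-error
# constraint (center tap = -1, remaining taps sum to 1).
import torch
import torch.nn as nn

class ConstrainedConv2d(nn.Conv2d):
    def __init__(self, in_ch=1, out_ch=3, kernel_size=5):
        super().__init__(in_ch, out_ch, kernel_size,
                         padding=kernel_size // 2, bias=False)

    @torch.no_grad()
    def project(self):
        """Re-impose the prediction-error constraint on every filter."""
        k = self.kernel_size[0] // 2
        w = self.weight
        w[:, :, k, k] = 0.0                     # zero the center tap
        s = w.sum(dim=(2, 3), keepdim=True)
        # Naive renormalization: dividing by a sum that may be near zero
        # is one way the weights can blow up during training, which is the
        # kind of divergence the paper's correction functions address.
        w.div_(s)                               # surrounding taps sum to 1
        w[:, :, k, k] = -1.0                    # fix the center to -1

# Training-loop sketch: re-project after each update.
# layer = ConstrainedConv2d()
# for batch in loader:                          # hypothetical data loader
#     loss = ...; loss.backward(); optimizer.step(); layer.project()
```

Because the projection suppresses image content and keeps only residuals, the downstream CNN is pushed toward scanner-specific noise traces rather than anatomy, which matches the abstract's goal of learning forensic traces left by the MRI scanner.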