In recent years, the detection of deepfake videos has become a major topic in digital media forensics, as the number of such videos circulating on the Internet has risen drastically. Content providers such as Facebook and Amazon have become aware of this new vector for spreading misinformation on the Internet. In this work, a novel forgery detection method is proposed that builds on texture analysis as known from image classification and segmentation. Experimental results show its performance to be comparable to that of related works.
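The abstract does not name a specific texture descriptor or classifier, so the following Python sketch only illustrates the general texture-analysis pipeline, assuming local binary patterns (LBP) and an SVM as stand-ins; the frames and labels are synthetic placeholders rather than real data.

```python
# Minimal sketch of texture-based frame classification (assumed LBP + SVM).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbourhood size and radius

def lbp_histogram(frame_gray):
    """Normalised uniform-LBP histogram of one grayscale frame."""
    codes = local_binary_pattern(frame_gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

# Placeholder data: random "face crops" standing in for pristine/deepfake frames.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = np.repeat([0, 1], 20)  # 0 = pristine, 1 = deepfake (synthetic labels)

features = np.stack([lbp_histogram(f) for f in frames])
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:5]))
```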
Identifying the source of a video recording created by a camera or smartphone has been a common and challenging task in media forensics for many years. We present an approach to source identification for the widely used MP4 file format. Extending related work, we propose to consider the suitability of attribute field values and their respective order in the atom/box tree in a specific manner. The significance of a field attribute and its particular value for source identification is reflected by up- and down-weighting during the training and matching processes. Experimental results indicate that our approach allows major brands to be distinguished. Even device identification is possible for a subset of our training data.
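As an illustration of matching over the atom/box structure, the Python sketch below parses the ordered top-level box types of an MP4 file and scores them against weighted brand profiles; the profiles, weights, and scoring rule are hypothetical placeholders, not the weighting scheme described in the abstract.

```python
# Minimal sketch: ordered MP4 top-level boxes scored against weighted profiles.
import struct
import sys

def top_level_boxes(path):
    """Return the ordered list of top-level box (atom) types in an MP4 file."""
    boxes = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            boxes.append(box_type.decode("latin-1"))
            if size == 1:                      # 64-bit extended size follows
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            elif size == 0:                    # box extends to end of file
                break
            else:
                f.seek(size - 8, 1)            # skip box payload
    return boxes

# Hypothetical reference profiles: (position, box type) pairs with weights that
# up- or down-weight how discriminative each observation is for a brand.
PROFILES = {
    "brand_A": {(0, "ftyp"): 1.0, (1, "moov"): 2.0, (2, "mdat"): 0.5},
    "brand_B": {(0, "ftyp"): 1.0, (1, "mdat"): 2.0, (2, "moov"): 0.5},
}

def score(boxes, profile):
    """Sum the weights of all (position, type) observations the file matches."""
    return sum(w for (pos, name), w in profile.items()
               if pos < len(boxes) and boxes[pos] == name)

observed = top_level_boxes(sys.argv[1]) if len(sys.argv) > 1 else ["ftyp", "mdat", "moov"]
best = max(PROFILES, key=lambda brand: score(observed, PROFILES[brand]))
print(observed, "->", best)
```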
In recent years, convolutional neural networks (CNNs) have been widely used by researchers to perform forensic tasks such as image tampering detection. At the same time, adversarial attacks have been developed that are capable of fooling CNN-based classifiers. Understanding the transferability of adversarial attacks, i.e., an attack's ability to fool a CNN other than the one it was trained against, has important implications for designing CNNs that are resistant to attacks. While attacks on object recognition CNNs are believed to be transferable, recent work by Barni et al. has shown that attacks on forensic CNNs have difficulty transferring to other CNN architectures or to CNNs trained on different datasets. In this paper, we demonstrate that adversarial attacks on forensic CNNs are even less transferable than previously thought: they fail to transfer even between virtually identical CNN architectures. We show that several common adversarial attacks against CNNs trained to identify image manipulation fail to transfer to CNNs whose only difference lies in the class definitions (i.e., the same CNN architectures trained using the same data). We note that all formulations of class definitions contain the unaltered class. This has important implications for the future design of forensic CNNs that are robust to adversarial and anti-forensic attacks.
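The following Python sketch (using PyTorch) illustrates the shape of such a transferability experiment, assuming FGSM as the attack and two tiny, untrained stand-in CNNs that differ only in their class definitions; the real experiments use trained forensic CNNs and the attacks studied in the cited works, so everything below is illustrative only.

```python
# Minimal sketch of a cross-class-definition transferability check (assumed FGSM).
import torch
import torch.nn as nn

def make_cnn(num_classes):
    """Tiny stand-in for a forensic CNN; only the output layer differs."""
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(8 * 16, num_classes),
    )

source_cnn = make_cnn(num_classes=2)   # class definition A: unaltered vs. manipulated
target_cnn = make_cnn(num_classes=5)   # class definition B: unaltered + 4 manipulations

def fgsm(model, x, label, eps=2 / 255):
    """Craft an FGSM adversarial example against `model`."""
    x_adv = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 64, 64)                       # placeholder image patch
x_adv = fgsm(source_cnn, x, torch.tensor([1]))     # attack the source CNN

# Transferability check: does the example also change the target CNN's output?
print("target prediction (clean):", target_cnn(x).argmax(1).item())
print("target prediction (adv):  ", target_cnn(x_adv).argmax(1).item())
```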