Keywords
Algorithmic art; Authentication; Audio signal classification
Barcode; Blank paper
Computational art; Convolutional Neural Network (CNN)
Deepfake detection; Dual tree complex wavelet transform; Deep learning; Digital Watermarking; Discriminator; Deepfake
Encrypted Traffic Analysis; Euler video magnification
Frequency-domain analysis; Fake images; Fingerprint; Forensics; Falsified media
GAN; GAN image detection; Gradient Based Attacks; Generator
H.264/H.265 encoding & decoding; Haralick features; Holistic image manipulation detection
Image attribution; Image forensics; Image verification; Image splicing localization; Image tampering
Keystroke Biometrics
Linear pattern
Metadata; Multimedia security; Multimedia forensics; MP4; Machine Learning
Open Source Intelligence
PRNU; Privacy Issues; Photo-response non-uniformity (PRNU); Pattern Recognition; Privacy and Forensics; Pattern; Privacy
Quick response (QR) codes
Reverse engineering of deceptions; Robustness Evaluation; Rate-distortion optimization (RDO); Rotation angle estimation; Printed material; Robust Hashing
Security; Spectrograms; Sensor pattern noise; Steganography; Semi-fragile; Spoofing detection; Surveillance; Source identification; Signal rich art
Two-factor authentication
Voice authentication; Vector; Video motion; Video source verification
Watermarking; WiFi Capturing
2-D barcodes
 
Pages A04-1 - A04-7,  © Society for Imaging Science and Technology 2021
Digital Library: EI
Published Online: January  2021
Pages 271-1 - 271-7,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

In recent years, the detection of deepfake videos has become a major topic in digital media forensics, as the number of such videos circulating on the Internet has risen drastically. Content providers such as Facebook and Amazon have become aware of this new vehicle for spreading misinformation on the Internet. In this work, a novel forgery detection method is proposed that builds on the texture analysis techniques known from image classification and segmentation. Experimental results show its performance to be comparable to that of related work.
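The abstract does not name the exact texture descriptor, but the session keywords mention Haralick features; the following is a minimal sketch, assuming GLCM/Haralick-style statistics computed on grayscale face crops and fed to a generic classifier. The function name extract_texture_features and all parameter values are illustrative, not the authors' pipeline.

    # Illustrative sketch: GLCM (Haralick-style) texture features per face crop,
    # to be fed to a generic classifier. Descriptor and parameters are assumptions.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def extract_texture_features(gray_face, levels=64):
        """Quantize a grayscale face crop and compute GLCM statistics."""
        q = (gray_face // (256 // levels)).astype(np.uint8)   # reduce gray levels
        glcm = graycomatrix(q, distances=[1, 2],
                            angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=levels, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # A generic classifier (e.g. sklearn.svm.SVC) can then be fit on these
    # feature vectors with labels 0 = genuine frame, 1 = deepfake frame.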

Digital Library: EI
Published Online: January  2021
Pages 272-1 - 272-7,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

Recent advances in artificial intelligence make it increasingly hard to distinguish between genuine and counterfeit media, especially images and videos. One recent development is the rise of deepfake videos, which manipulate video content using advanced machine learning techniques. This involves replacing the face of an individual from a source video with the face of a second person in the destination video. The technique is becoming increasingly refined, as deepfakes grow more seamless and simpler to compute. Combined with the reach and speed of social media, deepfakes can easily fool individuals by depicting someone saying things that never happened, and thus can persuade people to believe fictional scenarios, create distress, and spread fake news. In this paper, we examine a technique for identifying deepfake videos. We use Euler video magnification, which applies spatial decomposition and temporal filtering to video data in order to highlight and magnify hidden features such as skin pulsation and subtle motions. Our approach uses features extracted with the Euler technique to train three models that classify counterfeit and unaltered videos, and we compare the results with existing techniques.
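As an illustration of the Euler (Eulerian) video magnification step the abstract describes, the sketch below applies a temporal band-pass filter to every pixel's time series and amplifies the result. The pass band (roughly the human pulse range) and the amplification factor are assumptions, and the paper's spatial decomposition and the three downstream classifiers are omitted.

    # Sketch of EVM-style temporal band-pass amplification on a video tensor.
    # frames: float array of shape (T, H, W), e.g. the intensity channel.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def magnify_temporal_band(frames, fps, low_hz=0.8, high_hz=3.0, alpha=20.0):
        """Amplify temporal variations in a chosen band (here: heart-rate range)."""
        nyq = fps / 2.0
        b, a = butter(2, [low_hz / nyq, high_hz / nyq], btype="band")
        band = filtfilt(b, a, frames, axis=0)   # filter each pixel's time series
        return frames + alpha * band            # magnified video for feature extraction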

Digital Library: EI
Published Online: January  2021
Pages 273-1 - 273-7,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

Attackers may manipulate audio with the intent of presenting falsified reports, changing the apparent opinion of a public figure, or gaining influence and power. The prevalence of inauthentic multimedia continues to rise, so it is imperative to develop tools that determine the legitimacy of media. We present a method that analyzes audio signals to determine whether they contain real human voices or fake human voices (i.e., voices generated by neural acoustic and waveform models). Instead of analyzing the audio signals directly, the proposed approach converts them into spectrogram images capturing frequency, intensity, and temporal content, and evaluates these images with a Convolutional Neural Network (CNN). Training on both genuine human voice signals and synthesized voice signals, we show that our approach achieves high accuracy on this classification task.
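A minimal sketch of the described pipeline, assuming a log-magnitude spectrogram and a small CNN; the spectrogram parameters and the network architecture are illustrative, not the authors'.

    # Sketch: audio -> log-spectrogram "image" -> small CNN (real vs. synthesized voice).
    import numpy as np
    import torch.nn as nn
    from scipy.signal import spectrogram

    def audio_to_logspec(signal, sr, nperseg=512):
        f, t, sxx = spectrogram(signal, fs=sr, nperseg=nperseg)
        return np.log1p(sxx).astype(np.float32)          # (freq_bins, time_frames)

    class SpecCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
            self.classifier = nn.Linear(32 * 8 * 8, 2)    # genuine vs. synthesized

        def forward(self, x):                             # x: (batch, 1, freq, time)
            return self.classifier(self.features(x).flatten(1))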

Digital Library: EI
Published Online: January  2021
Pages 274-1 - 274-6,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

Nowadays, almost everyone owns a device capable of photography, and more and more photos are taken and distributed over the Internet. In the era of analogue photography, pictures were considered legally admissible evidence; today, because of the multitude of ways to manipulate digital pictures, this is not necessarily the case. Metadata can provide information about the origin of an image, provided it has not been altered. This work shows how metadata can be extracted and verified. The supplementary meta information of an image, and the standards that govern it, are of central importance. We introduce a method for comparing metadata with the visual image content, applying machine learning to automatically classify information from the image. Finally, an exemplary verification of the metadata by means of the weather is carried out to give a practical example of how the presented approach works. Based on this example and on the presented concept, metadata verifiers that check several aspects can be created in the future; such verifiers can help detect forged metadata in a forensic investigation.
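A minimal sketch of the metadata-extraction side, using Pillow to read EXIF tags; the functions classify_weather_in_image and historical_weather are hypothetical placeholders for the paper's learned image classifier and a weather-archive lookup.

    # Sketch: pull EXIF metadata and cross-check it against the image content.
    # classify_weather_in_image and historical_weather are hypothetical placeholders.
    from PIL import Image, ExifTags

    def read_exif(path):
        exif = Image.open(path).getexif()
        return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    def metadata_is_plausible(path):
        meta = read_exif(path)
        timestamp = meta.get("DateTime")                 # e.g. "2021:01:17 14:32:05"
        if timestamp is None:
            return None                                  # nothing to verify against
        predicted = classify_weather_in_image(path)      # e.g. "rain", "clear", "snow"
        recorded = historical_weather(meta.get("GPSInfo"), timestamp)  # GPS IFD parsing omitted
        return predicted == recorded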

Digital Library: EI
Published Online: January  2021
Pages 275-1 - 275-7,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

During image tampering, rotation is often involved to make the forgery more convincing; hence, estimating the rotation angle accurately and locally is of forensic importance. Recently, a rotation angle estimation scheme based on the linear pattern (LP) was proposed, achieving state-of-the-art performance, especially when the rotation angle is small. However, due to limitations of the discrete wavelet transform (DWT) it relies on, the existing LP-based method cannot always accurately extract the rotated linear pattern from a rotated image. To fill this gap, we propose to extract the rotated LP using the dual-tree complex wavelet transform (DTCWT). Thanks to the good directional selectivity of the DTCWT, the proposed method extracts the LP more accurately. Experiments show that our method outperforms the state of the art in rotation angle estimation and is also a promising forensic tool for tampering localization.
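For readers unfamiliar with the DTCWT's directional selectivity, the sketch below uses the open-source dtcwt Python package to obtain the six oriented complex subbands from which a rotated linear pattern could be extracted; the paper's actual LP extraction and angle-estimation steps are not reproduced here.

    # Sketch: directionally selective subbands via the dual-tree complex wavelet
    # transform (open-source `dtcwt` package). Only the transform step is shown.
    import numpy as np
    import dtcwt

    def dtcwt_orientation_energy(gray_image, nlevels=3):
        transform = dtcwt.Transform2d()
        pyramid = transform.forward(gray_image.astype(float), nlevels=nlevels)
        # pyramid.highpasses[k] has shape (H_k, W_k, 6): six complex subbands
        # oriented at roughly +/-15, +/-45 and +/-75 degrees.
        finest = pyramid.highpasses[0]
        return [np.abs(finest[:, :, d]).mean() for d in range(6)]  # per-orientation energy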

Digital Library: EI
Published Online: January  2021
Pages 276-1 - 276-11,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

Recent advances in Generative Adversarial Networks (GANs) have led to the creation of realistic-looking digital images that pose a major challenge to detection by humans or computers. GANs are used in a wide range of tasks, from modifying small attributes of an image (StarGAN [14]) and transferring attributes between image pairs (CycleGAN [92]) to generating entirely new images (ProGAN [37], StyleGAN [38], SPADE/GauGAN [65]). In this paper, we propose a novel approach to detect, attribute, and localize GAN-generated images that combines image features with deep learning methods. For every image, co-occurrence matrices are computed on neighboring pixels of the RGB channels in different directions (horizontal, vertical, and diagonal). A deep learning network is then trained on these features to detect, attribute, and localize GAN-generated or GAN-manipulated images. A large-scale evaluation of our approach on five GAN datasets comprising over 2.76 million images (ProGAN, StarGAN, CycleGAN, StyleGAN, and SPADE/GauGAN) shows promising results in detecting GAN-generated images.
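A minimal sketch of the co-occurrence computation described above (per RGB channel, in horizontal, vertical, and diagonal directions), assuming 8-bit images and unit pixel offsets; the deep network trained on these matrices is not shown.

    # Sketch: per-channel pixel co-occurrence matrices in several directions,
    # stacked as the input tensor for a downstream CNN (network not shown).
    import numpy as np

    OFFSETS = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}

    def cooccurrence(channel, offset, levels=256):
        dy, dx = offset
        h, w = channel.shape
        a = channel[:h - dy, :w - dx].ravel()        # reference pixels
        b = channel[dy:, dx:].ravel()                # neighbors at the given offset
        mat, _, _ = np.histogram2d(a, b, bins=levels, range=[[0, levels], [0, levels]])
        return mat / mat.sum()                       # normalized 256x256 matrix

    def cooccurrence_tensor(rgb_image):
        mats = [cooccurrence(rgb_image[:, :, c], off)
                for c in range(3) for off in OFFSETS.values()]
        return np.stack(mats, axis=0)                # shape (9, 256, 256)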

Digital Library: EI
Published Online: January  2021
Pages 277-1 - 277-7,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

Digital image forensics aims to detect images that have been digitally manipulated. Realistic image forgeries involve a combination of splicing, resampling, region removal, smoothing, and other manipulation methods. While most detection methods in the literature focus on detecting a particular type of manipulation, identifying doctored images that involve a host of manipulations remains challenging. In this paper, we propose a novel approach to holistically detect tampered images using a combination of pixel co-occurrence matrices and deep learning. We extract horizontal and vertical co-occurrence matrices on three color channels in the pixel domain and train a model using a deep convolutional neural network (CNN) framework. Our method is agnostic to the type of manipulation and classifies an image as tampered or untampered. We train and validate our model on a dataset of more than 86,000 images. Experimental results show that our approach is promising, achieving an area under the curve (AUC) of more than 0.99 on the training and validation subsets. Our approach also generalizes well, achieving an AUC of around 0.81 on an unseen test dataset of more than 19,740 images released as part of the Media Forensics Challenge (MFC) 2020. At the time the challenge results were announced, our score was the highest among all participating teams.
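To complement the co-occurrence sketch above, the following shows the kind of compact CNN that could consume a stack of horizontal and vertical co-occurrence matrices for the three color channels, together with the AUC metric used for evaluation; the architecture is an assumption, not the authors' network.

    # Sketch of a binary tampered/untampered classifier over stacked co-occurrence
    # matrices (6 channels: horizontal + vertical for each RGB channel).
    import torch
    import torch.nn as nn
    from sklearn.metrics import roc_auc_score

    class CooccurrenceNet(nn.Module):
        def __init__(self, in_channels=6):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

        def forward(self, x):                         # x: (batch, 6, 256, 256)
            return self.net(x).squeeze(1)             # logit; sigmoid -> P(tampered)

    # Evaluation as reported in the paper's terms:
    # auc = roc_auc_score(y_true, torch.sigmoid(logits).numpy())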

Digital Library: EI
Published Online: January  2021
Pages 298-1 - 298-7,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

This work presents a fingerprint method for the unique identification of blank and printed paper with a smartphone, allowing secure authentication of products or documents by authorities or end users. The digital file contains no hidden data. The fingerprint method uses uncontrollable printing variabilities and the paper structure as features. These uncontrollable variabilities are mapped into a binary sequence, which serves as a representation of the features and acts as our fingerprint. The variabilities can be extracted from low- and high-quality paper as well as from printed material produced with low-cost office printers and high-end offset printing machines. Based on this fingerprint, various applications can be realized in which the distinction between original and copy or forgery is essential, such as combating piracy of packaging, tickets, coupons, or official documents. The results of the evaluation indicate that the proposed method is independent of the smartphone used, the paper, the printing technology, and the color temperature of the ambient light. Furthermore, the test results show that the proposed method works robustly at different distances from the smartphone camera to the paper.
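A toy sketch of the fingerprint idea, assuming the fine-scale residual of a captured paper patch is binarized by sign and two captures are compared via normalized Hamming distance; the filter, binarization rule, and acceptance threshold are illustrative, not the paper's method.

    # Sketch: binarize fine-scale print/paper variability into a bit string and
    # compare two captures by Hamming distance.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def paper_fingerprint(gray_patch, sigma=3.0):
        residual = gray_patch - gaussian_filter(gray_patch.astype(float), sigma)
        return (residual.ravel() > 0).astype(np.uint8)       # binary sequence

    def is_same_item(fp_a, fp_b, max_fraction=0.25):
        hamming = np.count_nonzero(fp_a != fp_b) / fp_a.size
        return hamming < max_fraction        # small distance -> same physical paper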

Digital Library: EI
Published Online: January  2021
Pages 299-1 - 299-9,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 4

Signal rich art is an alternative paradigm for watermarking in which a signal is embedded in an image, or in a software application such as a website or app, as a visible artistic pattern. In this paper we present a new algorithm, which we call object placement, for generating signal-carrying patterns from a dictionary of objects. In an alternative approach, called object position modulation, we locally perturb the positions of objects in a given pattern to embed the signal. We also present advances over previous techniques.
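A toy sketch of object position modulation as described above: each payload bit nudges one object's coordinate by a small offset, and decoding compares against the unperturbed pattern (a non-blind assumption); the offset size and decoding rule are illustrative, not the authors' algorithm.

    # Toy sketch of object position modulation over an (N, 2) array of object centers.
    import numpy as np

    def embed_bits(positions, bits, delta=1.5):
        """Shift object i right for bit 1, left for bit 0, by a small offset."""
        out = positions.astype(float).copy()
        for i, bit in enumerate(bits):
            out[i, 0] += delta if bit else -delta
        return out

    def extract_bits(original, modulated, n_bits):
        """Non-blind decoding: compare perturbed positions against the original pattern."""
        shifts = modulated[:n_bits, 0] - original[:n_bits, 0]
        return (shifts > 0).astype(int)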

Digital Library: EI
Published Online: January  2021
